Looking at the source for `UsernamePasswordAuthenticationToken`:

```java
/**
 * This constructor can be safely used by any code that wishes to create a
 * <code>UsernamePasswordAuthenticationToken</code>, as the {@link #isAuthenticated()}
 * will return <code>false</code>.
 */
public UsernamePasswordAuthenticationToken(Object principal, Object credentials) {
    super(null);
    this.principal = principal;
    this.credentials = credentials;
    setAuthenticated(false);
}

/**
 * This constructor should only be used by <code>AuthenticationManager</code> or
 * <code>AuthenticationProvider</code> implementations that are satisfied with producing
 * a trusted (i.e. {@link #isAuthenticated()} = <code>true</code>) authentication token.
 *
 * @param principal
 * @param credentials
 * @param authorities
 */
public UsernamePasswordAuthenticationToken(Object principal, Object credentials,
        Collection<? extends GrantedAuthority> authorities) {
    super(authorities);
    this.principal = principal;
    this.credentials = credentials;
    super.setAuthenticated(true); // must use super, as we override
}
```

Your code only seems to be setting the username and password. To produce an authenticated token, you need to pass in three parameters: username, password, and a collection of granted authorities. If you have the authorities, pass them; otherwise, pass null as the third parameter. Hope this works for you.
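For illustration, here is a minimal sketch of how the two constructors are typically used (the variable names are mine, not from your code):

```java
// Unauthenticated "request" token (2-arg constructor), e.g. before authentication runs:
Authentication request = new UsernamePasswordAuthenticationToken(username, password);

// Trusted token (3-arg constructor), built after the credentials have been verified:
List<GrantedAuthority> authorities = AuthorityUtils.createAuthorityList("ROLE_USER");
Authentication result = new UsernamePasswordAuthenticationToken(username, password, authorities);
```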
To your second question first: there is no way to break/stop a promise chain, unless your callback throws, like:

```javascript
doAsync()
  .then(() => { throw 'sth wrong' })
  .then(() => {
    // code here never runs
  })
```

You can try the demos below to verify that the second callback still runs:

```javascript
doAsync()
  .then(() => { res.end('end') })
  .then(() => {
    // code here always runs
  })

doAsync()
  .then(() => { return; })
  .then(() => {
    // code here always runs
  })
```

To your first question: use the second parameter of `then()`, which is the rejection handler, and each time split the logic into two parts:

```javascript
var p = new Promise(function(resolve, reject) {
  return ad.auth(username, password).then(() => {
    // check if a 401 is needed; if it is, reject
    if (/* your logic decides a 401 is not needed */)
      resolve(username)
    else
      reject({ msg: 'authentication has failed', status: 401 })
  })
});

p
  .then(
    (username) => {
      // this only runs when the previous promise resolves
      return User.findOne({ username }).exec()
    },
    (data) => {
      // in fact, in your case you don't even have to have the reject callback
      return data
    }
  )
  .then(
    (found) => {
      return new Promise(function(resolve, reject) {
        if (found && /* your logic to determine it's not 403 */)
          resolve(found)
        else
          reject({ msg: 'unauthorized, no account in DB', status: 403 })
      })
    }
  )
  .then(
    (user) => {
      return new Promise(function(resolve, reject) {
        if (/* your logic to determine it has the full info */)
          resolve(user)
        else
          resolve(ad.getUserDetails(username, password)) // resolve with the lookup's promise
      })
    }
  )
  .then(
    (user) => {
      // all is good, do the good logic
    },
    (data) => {
      // something went wrong, so here you can handle all the rejections in one place
      res.send(data)
    }
  )
```
You need to link your Facebook external login to your Google external login for your email by using `UserManager.AddLoginAsync`; you cannot register twice with the same address if you use the address as the login.

Check out the Identity sample in the Identity GitHub repo: https://github.com/aspnet/Identity/blob/dev/samples/IdentitySample.Mvc/Controllers/ManageController.cs

To link an external login to a user, the Manage controller exposes the methods `LinkLogin` and `LinkLoginCallback`:

- `LinkLogin` requests a redirect to the external login provider to link a login for the current user
- `LinkLoginCallback` processes the provider response

```csharp
//
// POST: /Manage/LinkLogin
[HttpPost]
[ValidateAntiForgeryToken]
public IActionResult LinkLogin(string provider)
{
    // Request a redirect to the external login provider to link a login for the current user
    var redirectUrl = Url.Action("LinkLoginCallback", "Manage");
    var properties = _signInManager.ConfigureExternalAuthenticationProperties(provider, redirectUrl, _userManager.GetUserId(User));
    return Challenge(properties, provider);
}

//
// GET: /Manage/LinkLoginCallback
[HttpGet]
public async Task<ActionResult> LinkLoginCallback()
{
    var user = await GetCurrentUserAsync();
    if (user == null)
    {
        return View("Error");
    }
    var info = await _signInManager.GetExternalLoginInfoAsync(await _userManager.GetUserIdAsync(user));
    if (info == null)
    {
        return RedirectToAction(nameof(ManageLogins), new { Message = ManageMessageId.Error });
    }
    var result = await _userManager.AddLoginAsync(user, info);
    var message = result.Succeeded ? ManageMessageId.AddLoginSuccess : ManageMessageId.Error;
    return RedirectToAction(nameof(ManageLogins), new { Message = message });
}
```
First of all, every API request must go through HTTPS. Then you can "secure" user-specific APIs by giving each user a unique token which must be sent with every request. It is also possible to check the host or user agent of the client that requests the API and allow only specific custom user agents (depending on your needs).

Other than that:

- If you need a JSON response while the user is logged in on the same server, you can check whether a given cookie or session is set and can be related to that one specific user.
- If you do server-to-server requests to that API, you could check that the requesting server's hostname is valid and matches one of those that are allowed access.
- You can also use encryption to secure your API response (again, depending on your needs). If so, you can use private/public key encryption similar to GPG/PGP. Of course, only the party that should have access to the API should be able to decrypt the response.
- A GUID (Globally Unique Identifier) may be an option if you don't care whether anyone could find out the path to your API. GUID URLs could look like this: example.com/api/v1/c9a646d3-9c61-4cb7-bfcd-ee2522c8f633

A minimal sketch of the per-user token idea follows below.
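All names here are illustrative (the header name, the token values); in practice the token-to-user mapping would live in a database, not in memory:

```python
# token -> user mapping; in production this lives in a database
API_TOKENS = {
    "3f9a1c...": "alice",
    "77c2e0...": "bob",
}

def authenticate(headers):
    """Resolve the per-user token sent with every request, or fail."""
    token = headers.get("X-Api-Token")  # header name is an assumption
    user = API_TOKENS.get(token)
    if user is None:
        raise PermissionError("invalid or missing API token")
    return user
```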
The easy way: map an `activeSessionId` field onto your `User` class:

```php
/**
 * @ORM\Entity
 * @ORM\Table(name="fos_user")
 */
class User extends BaseUser
{
    /**
     * @ORM\Id
     * @ORM\Column(type="integer")
     * @ORM\GeneratedValue(strategy="AUTO")
     */
    protected $id;

    /**
     * @ORM\Column(type="string", length=255, nullable=true)
     */
    protected $activeSessionId;

    public function loginWithSessId($sessionId)
    {
        $this->activeSessionId = $sessionId;
    }

    public function logout()
    {
        $this->activeSessionId = null;
    }

    public function getActiveSessId()
    {
        return $this->activeSessionId;
    }
}
```

Then listen to the `security.interactive_login` event, which is fired every time the user logs in, and save a reference to the session id together with the user:

```php
namespace AppBundle\Security;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\Security\Http\Event\InteractiveLoginEvent;
use Symfony\Component\Security\Http\SecurityEvents;
use FOS\UserBundle\Model\UserManagerInterface;

class LoginListener implements EventSubscriberInterface
{
    private $userManager;

    public function __construct(UserManagerInterface $userManager)
    {
        $this->userManager = $userManager;
    }

    public static function getSubscribedEvents()
    {
        return array(
            SecurityEvents::INTERACTIVE_LOGIN => 'onSecurityInteractiveLogin',
        );
    }

    public function onSecurityInteractiveLogin(InteractiveLoginEvent $event)
    {
        $user = $event->getAuthenticationToken()->getUser();
        $session = $event->getRequest()->getSession();
        $user->loginWithSessId($session->getId());
        $this->userManager->updateUser($user);
    }
}
```

You can then register the listener with:

```xml
<service id="app_bundle.security.login_listener" class="AppBundle\Security\LoginListener">
    <argument type="service" id="fos_user.user_manager"/>
    <tag name="kernel.event_subscriber" />
</service>
```

or

```yaml
# app/config/services.yml
services:
    app_bundle.security.login_listener:
        class: AppBundle\Security\LoginListener
        arguments: ['@fos_user.user_manager']
        tags:
            - { name: kernel.event_subscriber }
```

Now that your `User` entity knows which session is the last one, you can create a listener for the `security.authentication.success` event and check whether the current session id matches the last active one. If it doesn't, then it's not an active session anymore.

```php
namespace AppBundle\Security;

use Symfony\Component\Security\Core\AuthenticationEvents;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\Security\Core\Event\AuthenticationEvent;
use Symfony\Component\HttpFoundation\RequestStack;
use FOS\UserBundle\Model\UserManagerInterface;

class AuthenticationListener implements EventSubscriberInterface
{
    private $requestStack;
    private $userManager;

    public function __construct(RequestStack $requestStack, UserManagerInterface $userManager)
    {
        $this->requestStack = $requestStack;
        $this->userManager = $userManager;
    }

    public static function getSubscribedEvents()
    {
        return array(
            AuthenticationEvents::AUTHENTICATION_SUCCESS => 'onAuthenticationSuccess',
        );
    }

    public function onAuthenticationSuccess(AuthenticationEvent $event)
    {
        $token = $event->getAuthenticationToken();
        $sessionId = $this->requestStack->getMasterRequest()->getSession()->getId();
        $activeSessId = $token->getUser()->getActiveSessId();

        if ($activeSessId && $sessionId !== $activeSessId) {
            $token->setAuthenticated(false); // Sets the authenticated flag.
        }
    }
}
```

Finally:

```xml
<service id="app_bundle.security.auth_listener" class="AppBundle\Security\AuthenticationListener">
    <argument type="service" id="request_stack"/>
    <argument type="service" id="fos_user.user_manager"/>
    <tag name="kernel.event_subscriber" />
</service>
```

or

```yaml
# app/config/services.yml
services:
    app_bundle.security.auth_listener:
        class: AppBundle\Security\AuthenticationListener
        arguments: ['@request_stack', '@fos_user.user_manager']
        tags:
            - { name: kernel.event_subscriber }
```
For 1, you can use the Code Injector plugin. See this question.

For 2, I failed to understand what you mean by "integrate the generated SchemaType class into the Schema class".

For 3: yes, with your own XJC plugin, but it's probably a bit hard. :)

A recommendation: use schema-derived classes as DTOs only; don't try to push much business logic there.

Update

It's still a bit hard to understand what you want to achieve. You explain what you want to do for your user, with all these "is-a" and "has-a" relationships, but it's still not clear what you mean by "integrate". On the other hand, the whole story reduces to the following question:

Which code is generated now, and which code do you want to be generated?

That's the core. If you answer this, you'll arrive at a programming question which can be answered. Right now you just describe your use case and expect someone to design a solution for you.

From what I understand, you just want your schema-derived classes DelimitedSchema and FixedWidthSchema to actually implement (or extend) your base class Schema. So why don't you do just that? With xjc:extends (or the JAXB Inheritance Plugin) you can easily make DelimitedSchema and FixedWidthSchema extend Schema. Your Schema class will probably be an abstract class which defines a couple of abstract methods that can only be implemented by specific implementations. This can be done by injecting code using the Code Injector plugin: you just inject implementations of the abstract methods from the Schema class into your DelimitedSchema and FixedWidthSchema classes. Then instances of these classes can be returned to the user as implementations of Schema.

What puzzles me is that you actually already know all these elements. You knew about xjc:extends, code injection and so on. What is missing?

Finally, a few recommendations. As I mentioned before, you'd better use schema-derived classes as DTOs only. Integrating schema-derived code with business logic often results in an unmaintainable mess. Better to cleanly model your business classes and copy data to/from the DTOs. This might seem like more work at first but will pay off later, for instance when you need to support several versions of the exchange schema in parallel. The fact that you say "normal code would be a piece of cake" is a symptom: you're fighting code generation trying to make it intelligent, but maybe it should be left dumb. Better not to move unmarshal/marshal methods into business classes; keep serialization separate from the business model. Implement a separate SchemaReader or something like that.
Hope this helps you.

```java
public void createUser() {
    final String randPassword = getRandomPassword();
    final String userName = "someuser";
    final String email = "[email protected]";
    authenticationService.setAuthentication(userName, randPassword.toCharArray());
    System.out.println(randPassword);
    AuthenticationUtil.runAs(new AuthenticationUtil.RunAsWork<Void>() {
        public Void doWork() throws Exception {
            Map<QName, Serializable> properties = new HashMap<QName, Serializable>();
            properties.put(ContentModel.PROP_USERNAME, userName);
            properties.put(ContentModel.PROP_PASSWORD, randPassword);
            properties.put(ContentModel.PROP_EMAIL, email);
            NodeRef personNodeRef = personService.createPerson(properties);
            personService.notifyPerson(userName, randPassword);
            return null;
        }
    }, AuthenticationUtil.getSystemUserName());
}

private String getRandomPassword() {
    Calendar calendar = Calendar.getInstance();
    SecureRandom random = new SecureRandom();
    String randomPassword = new BigInteger(130, random).toString(32);
    randomPassword = randomPassword + "-" + calendar.getTimeInMillis();
    return randomPassword;
}
```
OK, this is my stripped-down example, which should illustrate the point:

```typescript
@Injectable()
export class AuthGuardService implements CanActivate {
  toUrl;

  constructor(public authenticationService: AuthenticationService, public router: Router) { }

  canActivate(route, state): boolean {
    this.toUrl = state.url; // This is the url where it's going
    if (this.authenticationService.isLoggedIn()) return true;
    this.router.navigate(['/login', {redirect: this.toUrl}]);
    return false; // block the original navigation
  }
}
```

and in the login component, use `ngOnInit` to check for any redirect URLs:

```typescript
export class LoginComponent {
  redirect;

  constructor(
    private authenticationService: AuthenticationService,
    private route: ActivatedRoute,
    private router: Router) { }

  ngOnInit() {
    this.redirect = this.route.snapshot.params['redirect'];
  }

  logIn(): void {
    this.authenticationService
      .login(this.searchParams)
      .subscribe(
        () => { this.logInSuccess(); },
        error => { this.logInFail(error) })
  }

  logInSuccess(): void {
    if (this.redirect) {
      this.router.navigateByUrl(this.redirect);
    } else {
      this.router.navigate([''])
    }
  }
}
```
From the source - the dynamic feature will ENABLE authentication on `@PermitAll`, not disable it. See this from `AuthDynamicFeature`:

```java
final boolean annotationOnClass = (resourceInfo.getResourceClass().getAnnotation(RolesAllowed.class) != null) ||
        (resourceInfo.getResourceClass().getAnnotation(PermitAll.class) != null);
final boolean annotationOnMethod = am.isAnnotationPresent(RolesAllowed.class) ||
        am.isAnnotationPresent(DenyAll.class) ||
        am.isAnnotationPresent(PermitAll.class);

if (annotationOnClass || annotationOnMethod) {
    context.register(authFilter);
```

So in order to not have auth on a specific resource, you can never apply it at class level (since it would then apply to all your resource methods). See this example:

```java
public class AuthenticatorTest extends io.dropwizard.Application<DBConfiguration> {

    @Override
    public void run(DBConfiguration configuration, Environment environment) throws Exception {
        environment.jersey().register(new MyHelloResource());
        UserAuth a = new UserAuth();
        environment.jersey().register(new AuthDynamicFeature(new BasicCredentialAuthFilter.Builder<Principal>()
                .setAuthenticator(a).setRealm("SUPER SECRET STUFF").buildAuthFilter()));
    }

    public static void main(String[] args) throws Exception {
        new AuthenticatorTest().run("server", "/home/artur/dev/repo/sandbox/src/main/resources/config/test.yaml");
    }

    @Path("test")
    @Produces(MediaType.APPLICATION_JSON)
    public static class MyHelloResource {

        @GET
        @Path("asd")
        @PermitAll
        @UnitOfWork
        public String test(String x) {
            return "Hello";
        }

        @GET
        @Path("asd2")
        public String test2() {
            return "test2";
        }
    }

    public static class Person implements Principal {
        @Override
        public String getName() {
            return null;
        }
    }

    public static class UserAuth implements Authenticator<BasicCredentials, Principal> {
        @Override
        public Optional<Principal> authenticate(BasicCredentials credentials) throws AuthenticationException {
            return Optional.of(new Principal() {
                @Override
                public String getName() {
                    return "artur";
                }
            });
        }
    }
}
```

The `MyHelloResource` has two methods: `test` and `test2`. `test` applies `@PermitAll` to enable auth, while `test2` does nothing of the sort. This means that auth is not registered for `test2`. Here's the execution:

```
artur@pandaadb:~/dev/eclipse/eclipse_jee$ curl localhost:9085/api/test/asd -v
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9085 (#0)
> GET /api/test/asd HTTP/1.1
> Host: localhost:9085
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Date: Tue, 01 Nov 2016 10:30:10 GMT
< WWW-Authenticate: Basic realm="SUPER SECRET STUFF"
< Content-Type: text/plain
< Content-Length: 49
<
* Connection #0 to host localhost left intact
Credentials are required to access this resource.

artur@pandaadb:~/dev/eclipse/eclipse_jee$ curl localhost:9085/api/test/asd2 -v
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9085 (#0)
> GET /api/test/asd2 HTTP/1.1
> Host: localhost:9085
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Tue, 01 Nov 2016 10:30:14 GMT
< Content-Type: application/json
< Vary: Accept-Encoding
< Content-Length: 5
<
* Connection #0 to host localhost left intact
test2
```

The first request is denied with a 401, while the second correctly prints test2.
Background: the Google .NET client library by default stores the credentials for the users in %AppData%; the field where you have "user" is how it stores them. Example:

```csharp
UserCredential credential;

using (var stream = new FileStream(clientSecretsJsonFilePath, FileMode.Open, FileAccess.Read))
{
    credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
        GoogleClientSecrets.Load(stream).Secrets,
        new[] { DriveService.Scope.Drive, DriveService.Scope.DriveFile },
        "LookIAmAUniqueUser",
        CancellationToken.None,
        new FileDataStore("Drive.Auth.Store")
    ).Result;
}
```

Assuming the user clicks accept on the authentication request screen, a new file will be created in that directory with the following structure:

```
Google.Apis.Auth.OAuth2.Responses.TokenResponse-LookIAmAUniqueUser.TokenResponse-LookIAmAUniqueUser
```

Each user will have their own file; you change users by changing the "LookIAmAUniqueUser" value.

Solution one: identify your users differently, so that you know whether you are loading me vs. you. Just by changing the "user" parameter, the library will load the credential needed, or ask the user for authentication if it can't find it.

Solution two: by default the library uses FileDataStore; that's why I have it in mine and you don't have it in yours. If you are storing the credentials someplace else, say the refresh token in the database along with your user information, you can create your own implementation of IDataStore which will load the credentials from there (see the skeleton below). My article on FileDataStore might help you understand what it's doing: Google .NET – FileDataStore demystified. Sorry, I haven't had time to create an article on implementing IDataStore; however, I might have an example or two lying around, depending on where you are storing those credentials.
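For reference, a skeleton of solution two. The interface shape is the one from the Google.Apis.Util.Store namespace; the bodies are placeholders you would fill with your own database logic:

```csharp
using System.Threading.Tasks;
using Google.Apis.Util.Store;

// Sketch: a database-backed credential store; the DB access is left as a placeholder.
public class DbDataStore : IDataStore
{
    public Task StoreAsync<T>(string key, T value)
    {
        // serialize `value` and save it against `key` (e.g. your user id) in the DB
        throw new System.NotImplementedException();
    }

    public Task DeleteAsync<T>(string key)
    {
        // remove the stored credential for `key`
        throw new System.NotImplementedException();
    }

    public Task<T> GetAsync<T>(string key)
    {
        // load and deserialize the credential for `key`, or return default(T)
        throw new System.NotImplementedException();
    }

    public Task ClearAsync()
    {
        // remove all stored credentials
        throw new System.NotImplementedException();
    }
}
```

You would then pass `new DbDataStore()` in place of the `FileDataStore` in the `AuthorizeAsync` call above.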
This is an excellent start. These kinds of questions are tricky, and there is no way to prove these things secure. There are some good conceptual "pillars" to guide one's thoughts on it.

The pillars of security:

Privacy: this code does not provide it. An attacker in the middle can read the structure of the message and can understand almost all of it. This gives them a strong stance. This system is open to replay attacks.

Authentication: by matching the password hash you are giving a strong assurance that this person does indeed know the password. PBKDF2 with a salt is state of the art, and it looks like you have this down.

Integrity: this code does not provide it. The public key could be changed in flight. An attacker can substitute their own public key and cause the system to generate messages that they can then read. Whether this attack succeeds depends on the rest of the system detecting the breach and responding to it by comparing the public and private keys. This could also open the system to known or unknown crypto attacks by allowing a "chosen key attack", which is generally considered dangerous. You really need to assure the integrity of the entire message: an attacker can take a password and key they do know, along with a private key they do know, and switch them. Combined with replay attacks, this will likely break the system.

Suggestions:

- The structure of the entire message must be authenticated. There are two approaches to this: either use a keyed MAC (Message Authentication Code) or use an "Authenticated Encryption" algorithm. MACs are included in most common crypto libraries. Don't roll your own MAC, and don't try to use a plain hash for this.
- The privacy of the message should be ensured. This can be accomplished by ensuring that the message is sent over TLS (you may already be doing this).
- The message must include protection against replay attacks. This can be done in many ways. One strong way is to use a NONCE (Number used ONCe), so the server will only ever accept each message once. This must not be "per user", because many replay attacks are cross-user. (A minimal sketch appears at the end of this answer.)

The part you are absolutely doing correctly is asking for public scrutiny early in the process. This puts you way ahead of the industry norm. Remember that "Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can't break." https://www.schneier.com/blog/archives/2011/04/schneiers_law.html

EDIT: make sure the password that protects them from you guessing their private key is not the same password you use to authenticate them (and that there is no way for them to use the same password).
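A minimal sketch of server-side NONCE tracking, with all names illustrative (a real store needs persistence and expiry, not an in-memory set):

```python
import secrets

seen_nonces = set()  # global, cross-user store; in production use a DB with expiry

def new_nonce():
    # the client generates this and includes it inside the authenticated message
    return secrets.token_hex(16)

def accept(nonce, handle_message, message):
    """Reject any message whose nonce has been seen before (a replay)."""
    if nonce in seen_nonces:
        return False            # replayed message: drop it
    seen_nonces.add(nonce)      # record before processing
    handle_message(message)
    return True
```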
The Spotify API playlists endpoint requires an authentication token. A very primitive example; in these lines you can capture the auth token:

```javascript
// use the access token to access the Spotify Web API
request.get(options, function(error, response, body) {
  console.log(body);
  token = access_token;
});
```

Then, your code for getting playlists:

```javascript
var token = '';

app.get('/playlists', function(req, res) {
  var state = generateRandomString(16);
  res.cookie(stateKey, state);

  // your application requests authorization
  var scope = 'playlist-read-private';
  res.redirect('https://api.spotify.com/v1/me/playlists?' +
    querystring.stringify({
      access_token: token,
      token_type: 'Bearer',
      response_type: 'code',
      client_id: client_id,
      scope: scope,
      redirect_uri: redirect_uri,
      state: state
    }));
});
```

First, you visit http://localhost:8888/login for authentication; then you go to http://localhost:8888/playlists for the playlists.
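As an alternative sketch (not from the original answer), you can also call the Web API endpoint server-side with the token in an `Authorization` header, using the same `request` library as above:

```javascript
// assumes `token` already holds a valid access token from the login step
request.get({
  url: 'https://api.spotify.com/v1/me/playlists',
  headers: { 'Authorization': 'Bearer ' + token },
  json: true
}, function(error, response, body) {
  console.log(body); // the current user's playlists
});
```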
If you were developing with .NET Framework, AcquireTokenAsync does provide an overload using UserPasswordCredential. Here is a code sample for your reference:

```csharp
AuthenticationContext authenticationContext = new AuthenticationContext(UserModeConstants.AuthString, false);
string resource = "";
string clientId = "";
string userName = "";
string password = "";
UserPasswordCredential userPasswordCredential = new UserPasswordCredential(userName, password);
var token = authenticationContext.AcquireTokenAsync(resource, clientId, userPasswordCredential).Result.AccessToken;
```

I am using version 3.13.5.907 of Microsoft.IdentityModel.Clients.ActiveDirectory. This method only works for a native client application registered on Azure AD, since it doesn't provide the credential. If you want it to work for a web application/web API, you can make an HTTP request directly, like below:

```
POST: https://login.microsoftonline.com/xxxxx.onmicrosoft.com/oauth2/token
Content-Type: application/x-www-form-urlencoded

resource={resource}&client_id={clientId}&grant_type=password&username={userName}&password={password}&scope=openid&client_secret={clientSecret}
```
It all depends on what you want and why you're asking; a little more context might be applicable. Do you want to monitor from a Clojure process, or do you want to monitor a Clojure process? Which of these two processes do you have author control over?

JMX client

clojure.java.jmx is a client library to call Java JMX functions from Clojure, either locally or over the wire (JMX over RMI). It's basically the programming alternative to GUIs like JVisualVM / JMC.

JMX server

Over RMI

While you can always use JMX calls from within a Java process itself, to be able to monitor a Java / Clojure process remotely, you'd still need to include the -Dcom.sun.management.jmxremote parameters. This makes the process act as a server for JMX requests over an RMI connection.

Over HTTP

An alternative to RMI is serving JMX through a REST interface via jolokia. RMI is notoriously hard to manage across network infrastructure borders like firewalls, and to secure through authentication and authorization. JMX over REST is much easier to manage (reverse proxy exclusions for JMX URLs), and authentication and authorization can be done in your current web stack. There's no Clojure client for these REST calls, though, but it should be easy to mirror the clojure.java.jmx API and have it generate HTTP requests.

Exposing JMX beans

If you have a Clojure app that needs to expose application-specific metrics that can be read through JMX, you need to turn them into a JMX MBean. You can wrap any Clojure map ref in a bean, and any updates to that ref can be seen through a JMX request. clojure.java.jmx can help here as well.

```clojure
(def my-statistics (ref {:current-sessions 0}))

(jmx/register-mbean
  (jmx/create-bean my-statistics)
  "my.namespace:name=Sessions")

(defn login [user]
  ;(do-login user)
  (dosync
    (alter my-statistics update :current-sessions inc)))

(login "foo")

;just as an example to show you can read the bean through JMX
(jmx/attribute-names "my.namespace:name=Sessions")
=> (:current-sessions)

(jmx/read "my.namespace:name=Sessions" :current-sessions)
=> 1
```

Make sure the ref map has string/symbol keys and Java 'primitive' values, or you might get a read exception on the client side. The data from this bean can then be requested through JMX, provided you have set up a way to connect to the process.
Disclosure: I work at Auth0.

Disclaimer: if you have really set your mind on using Firebase, then from a practical point of view this might not help you, as it focuses on what Auth0 provides to solve problems similar to the one you described. From a theoretical point of view, however, it might still help, so I deemed it worthwhile to share. Enough with the legal stuff...

Check this guide for a fully detailed view of how Auth0 supports migrating users from your custom store to a hosted one:

"... automatic migration of users to Auth0 from a custom database connection. This feature adds your users to the Auth0 database one-at-a-time as each logs in and avoids asking your users to reset their passwords all at the same time."

The approach would be similar to your option C, but the only thing that would need to stay from the old system would be the database. Everyone would start using the new application, and the login would happen transparently for the users.

Depending on the APIs made available by Firebase, you could most likely implement something similar, and that would be my recommendation. Additionally, you should not even consider any process that includes manual steps and has to deal with plain-text passwords.

A final note: excellent decision on rebuilding your app to use an external authentication service, even if it's not Auth0. :) Authentication is a hard problem, and I wish more application developers stopped wasting time on issues totally unrelated to the business problems their applications solve.
I would like to suggest the following approach:

1. Create a TenantId column in each table that contains core business data (this is not required for any mapping table).
2. Use approach B, by creating an extension method that returns an IQueryable. This method can be an extension of the DbSet, so that anyone writing a filter clause can just call this extension method followed by the predicate. This makes it easier for developers to write code without worrying about the tenant-ID filter. The method itself applies the filter condition on the TenantId column based on the tenant context in which the query is being executed. Sample: `ctx.TenantFilter().Where(....)` (a sketch follows this list).
3. Instead of relying on the HTTP context, have the tenant ID passed into all of your service methods, so that handling tenant contexts is easy in both the web and the web-job applications. This makes a call free of ambient context and more easily testable.
4. The multi-tenant entity interface approach looks good; we have a similar implementation in our application, and it has worked fine so far.
5. Regarding indexing, add an index on the TenantId column in each table that has it; that should take care of the database-side query indexing.
6. Regarding the authentication part, I would recommend using ASP.NET Identity 2.0 with the OWIN pipeline. The system is very extensible, customisable, and easy to integrate with external identity providers, should you need that in the future.
7. Please do take a look at the repository pattern for Entity Framework, which enables you to write less code in a generic fashion. It helps get rid of code duplication and redundancy and is very easy to test with unit test cases.
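A minimal sketch of the extension-method idea from point 2. The names (`IMultiTenantEntity`, `TenantFilter`) are illustrative, not from the original post, and depending on your EF version you may need to build the predicate expression manually instead of going through an interface constraint:

```csharp
public interface IMultiTenantEntity
{
    Guid TenantId { get; }
}

public static class TenantQueryExtensions
{
    // Applies the tenant filter once, so callers only add their own predicates.
    public static IQueryable<T> TenantFilter<T>(this IQueryable<T> source, Guid tenantId)
        where T : class, IMultiTenantEntity
    {
        return source.Where(e => e.TenantId == tenantId);
    }
}

// Usage (tenantId resolved from the current tenant context, per point 3):
// var orders = ctx.Orders.TenantFilter(tenantId).Where(o => o.Total > 100).ToList();
```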
Actually, the basic authentication mechanism uses the session to store the visitor's identity, so once you get authenticated (by providing credentials in a login form) the application doesn't ask for the password again when you visit another page after login. So the session is used to keep the user's current state in the application. This is what happens in most cases.

On the other hand, stateless authentication works without using the session. In this case, the application doesn't keep any data in the session to identify the user on subsequent requests. Instead, it verifies every request independently.

When would you need this? Basically, when you are building some kind of API which serves resources as a service, where a user may send a request to your API to get data from anywhere. That user may not be a registered user of your system, but you may still allow them to consume data from your server based on some sort of token-based authentication. This is not a complete description of stateless auth, but it should give you some idea. Further, you may check "How to do stateless (session-less) & cookie-less authentication" and this, and you'll also find useful links if you search Google for the term Stateless Authentication.
There are two ways to access private user data with Google APIs:

1. Straight OAuth2, where you get consent from the owner of the account to access it.
2. Service accounts, which are technically pre-authorized by the developer.

Normally I would say, because you are only accessing the one account that you own, use a service account. Unfortunately, the YouTube API does not support service account authentication.

Due to the lack of service account support, you will have to use OAuth2. I have done this in the past: authenticate your script once, using a server-side language of some kind. The authentication server will return a refresh token to you. Refresh tokens can be used at any time to get a new access token; access tokens are what you use to access Google APIs, and they are only valid for an hour. Save this refresh token someplace. You will then be able to access the YouTube account in question whenever you like (see the example request at the end of this answer).

Note: you will have to watch it. Refresh tokens can, on rare occasions, become invalid. I recommend having a script ready that will allow you to re-authenticate the application and store a new refresh token. It's rare that it happens, but it can happen; best to be prepared.

OAuth Playground

Part of the point of OAuth is that it identifies your application to Google through the creation of your project on the Google Developer Console. Things like quota and access to particular APIs are controlled through that. If you spam the API, they will know and shut you down (never seen this happen). When you request access from a user, a consent screen pops up with the name of the project on the Google Developer Console; the project is identified by its client id and client secret. When I use the OAuth Playground, I get asked "Google OAuth 2.0 Playground would like to ...".

So by using the Playground, you are using Google's client id and client secret to create a refresh token for yourself. If N other devs are also doing this, the quota for YouTube may be used up in the course of a day. Also, security-wise, you are now giving that project access to your data. Ignore that for a second: what if Google suddenly decides to change the client id or generate a new one? Your refresh token will no longer work. What if random dev X is using it as well and starts spamming everything, and the client id gets shut down (I think this happened last year)? You're going to have to wait for Google to publish a new client id to replace the one that has now been banned.

The Google OAuth 2.0 Playground might seem nice, but it's not for daily use; IMO it's good for testing, nothing more. Create your own project and get your own access. It's not hard, and it only requires a programming language that can handle an HTTP POST. My tutorial: Google 3-legged OAuth2 flow.
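For reference, exchanging the stored refresh token for a fresh access token is a single HTTP POST (endpoint and parameter names per Google's OAuth2 documentation at the time of writing; the braced values are your own credentials):

```
POST https://accounts.google.com/o/oauth2/token
Content-Type: application/x-www-form-urlencoded

client_id={clientId}&client_secret={clientSecret}&refresh_token={refreshToken}&grant_type=refresh_token
```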
Note: I won't explain how to decrypt the data, but that should be rather easy to figure out using the code for encryption and the documentation links provided.

First of all, the user has to be able to select a file via an input element:

```html
<input type="file" id="file-upload" onchange="processFile(event)">
```

You can then load the content of the file using the HTML5 FileReader API:

```javascript
function processFile(evt) {
  var file = evt.target.files[0],
      reader = new FileReader();
  reader.onload = function(e) {
    var data = e.target.result;
    // to be continued...
  }
  reader.readAsArrayBuffer(file);
}
```

Encrypt the acquired data using the WebCrypto API. If you don't want to randomly generate the key, use crypto.subtle.deriveKey to create a key, for example, from a password that the user entered.

```javascript
// [...]
var iv = crypto.getRandomValues(new Uint8Array(16)); // Generate a 16 byte long initialization vector

crypto.subtle.generateKey({
    'name': 'AES-CBC',
    'length': 256
  }, false, ['encrypt', 'decrypt'])
  .then(key => crypto.subtle.encrypt({
    'name': 'AES-CBC',
    iv
  }, key, data))
  .then(encrypted => { /* ... */ });
```

Now you can send your encrypted data to the server (e.g. with AJAX). Obviously, you will also have to somehow store the initialization vector to later successfully decrypt everything.

Here is a little example which alerts the length of the encrypted data. Note: if it says "Only secure origins are allowed", reload the page with https and try the sample again (this is a restriction of the WebCrypto API): HTTPS-Link

```javascript
function processFile(evt) {
  var file = evt.target.files[0],
      reader = new FileReader();
  reader.onload = function(e) {
    var data = e.target.result,
        iv = crypto.getRandomValues(new Uint8Array(16));

    crypto.subtle.generateKey({
        'name': 'AES-CBC',
        'length': 256
      }, false, ['encrypt', 'decrypt'])
      .then(key =>
        crypto.subtle.encrypt({
          'name': 'AES-CBC',
          iv
        }, key, data)
      )
      .then(encrypted => {
        console.log(encrypted);
        alert('The encrypted data is ' + encrypted.byteLength + ' bytes long');
        // encrypted is an ArrayBuffer
      })
      .catch(console.error);
  }
  reader.readAsArrayBuffer(file);
}
```

```html
<input type="file" id="file-upload" onchange="processFile(event)">
```
Instead of using the CorsRegistry, you can write your own CorsFilter and add it to your security configuration.

Custom CorsFilter class:

```java
public class CorsFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
    }

    @Override
    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) servletResponse;
        HttpServletRequest request = (HttpServletRequest) servletRequest;
        response.setHeader("Access-Control-Allow-Origin", "*");
        response.setHeader("Access-Control-Allow-Methods", "GET,POST,DELETE,PUT,OPTIONS");
        response.setHeader("Access-Control-Allow-Headers", "*");
        response.setHeader("Access-Control-Allow-Credentials", "true"); // header values must be strings
        response.setHeader("Access-Control-Max-Age", "180");
        filterChain.doFilter(servletRequest, servletResponse);
    }

    @Override
    public void destroy() {
    }
}
```

Security config class:

```java
@Configuration
@EnableWebSecurity
public class OAuth2SecurityConfiguration extends WebSecurityConfigurerAdapter {

    @Bean
    CorsFilter corsFilter() {
        CorsFilter filter = new CorsFilter();
        return filter;
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .addFilterBefore(corsFilter(), SessionManagementFilter.class) // adds your custom CorsFilter
            .exceptionHandling().authenticationEntryPoint(authenticationEntryPoint).and()
            .formLogin()
                .successHandler(ajaxSuccessHandler)
                .failureHandler(ajaxFailureHandler)
                .loginProcessingUrl("/authentication")
                .passwordParameter("password")
                .usernameParameter("username")
                .and()
            .logout()
                .deleteCookies("JSESSIONID")
                .invalidateHttpSession(true)
                .logoutUrl("/logout")
                .logoutSuccessUrl("/")
                .and()
            .csrf().disable()
            .anonymous().disable()
            .authorizeRequests()
                .antMatchers("/authentication").permitAll()
                .antMatchers("/oauth/token").permitAll()
                .antMatchers("/admin/*").access("hasRole('ROLE_ADMIN')")
                .antMatchers("/user/*").access("hasRole('ROLE_USER')");
    }
}
```
JWT is just a token format, not an authentication/authorization protocol. Using JWT can be secure or insecure; it depends on how you use it. JWT is the native token format used by OIDC. You're probably referring to the usual use of JWTs as value tokens, which may be less secure than reference tokens. See here for a somewhat clearer explanation.

In the comments you ask about using what is essentially basic HTTP authentication (only using a JWT instead of base64(user:password) as the token). There is nothing wrong with basic authentication as long as the communication is properly secured (HTTPS using TLS >= 1.1), but there are severe limitations to this type of authentication: any API authentication that requires username and password means every client of the app needs to have its own user (principal).

The whole stack of protocols on top of authentication in the paper I linked to is meant to solve several problems encountered by APIs. The two main ones are:

- Federation: the user logs in once to a central system and does not need to log in again for each system that trusts the central system.
- Delegation: the user delegates authority to a third party, allowing it to do only specific tasks (and not others). This authority can be revoked individually, without needing to change the permissions of other third parties or change passwords.

Federation is the feature that lets you log in to StackOverflow with your Facebook or Google account without providing your password. Delegation allows you to specify what StackOverflow can do with your Facebook or Google account (e.g. get your personal info but not post). Delegation works on top of federation.

The problem with all this is that it makes your API more difficult to access: giving the client a username and password vs. making the user of the client log in to an identity system and authorize the client to use the API. If your system is a B2B API with a single client application, then using only authentication is fine. If your API involves users in any way, or is meant to be consumed by third-party applications, then you really have no choice but to implement the whole security stack (i.e. OIDC and OAuth). There is a (good) reason the stack was designed, and it'll save you from stumbling into the same problems it already solves.
As I'm not a Rails expert, I can't go into too many details here. But to answer the "I want some ideas" question, from a system architecture perspective you have 3 options:

Option 1: the front-end is Liferay, and some functionalities are provided by external Ruby/Rails apps. Specific business logic can be developed in Ruby/Rails and exposed via remote services that Liferay calls, or (if there is a UI) embedded via IFrame or inside Liferay's CMS.

- PROS: benefit from the flexibility Liferay provides to compose your site/pages, authentication and authorization, templates, collaboration tools, ...; develop the Ruby/Rails app unconstrained.
- CONS: additional work to either expose/call services or style the UI in a consistent way; no direct access to Liferay's API from Ruby; need SSO to allow logged-in users.

Option 2: the front-end is Liferay, and some functionalities are provided by Ruby/Rails apps deployed inside the portal. Liferay is written in Java, so Ruby is not the most obvious choice here. That said, there is a sample-ruby-portlet which demonstrates how to build portlets with Ruby. I'm not a Ruby expert myself; all I know is this is possible via JRuby. Not sure how Rails fits in that picture.

- PROS: benefit from the flexibility Liferay provides to compose your site/pages, authentication and authorization, templates, collaboration tools, ...; access to Liferay's API from Ruby; easier to achieve a consistent look and feel; no need for SSO to allow logged-in users.
- CONS: likely some constraints as to what can be done in Ruby.

Option 3: the front-end is a Rails app that uses some functionalities from Liferay. Liferay exposes all of its functionality via JSON-WS services that can be called remotely. Additionally, most (if not all) portlets can be embedded on pages outside Liferay (there is a JavaScript snippet provided in each portlet's configuration view).

- PROS: develop the Ruby/Rails app unconstrained; benefit from some OOTB Liferay functionality.
- CONS: additional work to either expose/call services or style the UI in a consistent way; no direct access to Liferay's API from Ruby; need SSO to allow logged-in users.

Obviously you can mix and match, as long as you are willing to tolerate the added complexity.
The issue is here:

```
org.springframework.security.authentication.dao.DaoAuthenticationProvider User '' not found
```

On the provided link, slide 6 says "firing organizationFilter". When you look into security filters, they are actually those static rules that I mentioned earlier. So something is in conflict there: the rule is being bypassed, and then it attempts to log in (with no user credentials). It is all there in the logs; it is just a matter of interpreting them correctly.

Right. Comment out this first:

```groovy
//grails.plugin.springsecurity.securityConfigType = 'Requestmap'
```

Then add:

```groovy
grails.plugin.springsecurity.controllerAnnotations.staticRules = [
    [pattern: '/', access: ['permitAll']],
    [pattern: '/error', access: ['permitAll']],
    [pattern: '/index', access: ['permitAll']],
    [pattern: '/index2.gsp', access: ['permitAll']],
    [pattern: '/shutdown', access: ['permitAll']],
    [pattern: '/assets/**', access: ['permitAll']],
    [pattern: '/**/js/**', access: ['permitAll']],
    [pattern: '/**/css/**', access: ['permitAll']],
    [pattern: '/**/images/**', access: ['permitAll']],
    [pattern: '/**/favicon.ico', access: ['permitAll']],
    [pattern: '/login/ajaxSuccess', access: ['permitAll']],
    [pattern: '/login/ajaxSuccess/**', access: ['permitAll']],
    [pattern: '/**/ajaxSuccess/**', access: ['permitAll']]
]
```

I have added 3 new rules at the very bottom; the first of them should fix the issue, but I added all three just in case. Then, on the line above it, you changed from Annotation to Requestmap, yet you still have controllerAnnotations.staticRules. You do need to pay attention to the finer details here: if you set something to be something else, then you need the relevant configuration for that. Please note, if you do wish to stick with Requestmap, then you may need to configure:

```groovy
grails.plugin.springsecurity.interceptUrlMap = [
    [pattern: '/', access: ['permitAll']],
    [pattern: '/something/**', access: ['ROLE_ADMIN', 'ROLE_USER']],
    [pattern: '/**', access: ['permitAll']],
]
```

For now, I would stick with securityConfigType: Annotation.
There are three aspects to acting as a second user.

1. GitHub Account

To use the GitHub web interface as another user (e.g. fork a repository, submit a pull request, post comments) you need to sign in to another account.*

Tip: switching between accounts is a pain because you have to sign out and sign in each time. You can sign in to two accounts at the same time using a private browsing window, a different browser, or a different browser profile.

2. SSH Authentication

A GitHub repository can be accessed over HTTPS or SSH. Both require authentication, which GitHub uses to implement permission levels. I'll describe how to clone a repository with SSH configured to authenticate as a second user.

Generate a new SSH key using

```
ssh-keygen -f KEYFILE
```

where KEYFILE is the path to the new key (e.g. ~/.ssh/bob_rsa). Add the SSH key to the GitHub account of the second user.

SSH needs to be configured to use ~/.ssh/bob_rsa, but only when you are trying to clone a repository as the second user. That way you can still clone repositories as your normal user with an SSH key you added to your normal GitHub account. Different configurations can be specified based on the host name, but for GitHub repositories the host name is always github.com. To specify configurations for only some of the cloned repositories, add a host alias by appending the following to ~/.ssh/config (credit):

```
# alias for github.com with a custom SSH key
Host bob.github.com
    HostName github.com
    IdentityFile ~/.ssh/bob_rsa
```

I've used the host name bob.github.com, but it can be any string. Now you can clone a GitHub repository as the second user using the host name bob.github.com (or whichever host name you used in the SSH configuration):

```
git clone git@bob.github.com:USER/PROJECT.git
```

If you clone a repository owned by your first user in this way, you should not be able to push commits to it until you add your second user as a collaborator.

Testing the SSH Configuration

If you encounter problems, check that SSH works by running `ssh git@bob.github.com` (replace bob.github.com with the host name in the `Host XXX` line). The first time you connect to GitHub over SSH, you should get a message like

```
The authenticity of host 'github.com (192.30.253.113)' can't be established.
RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
Are you sure you want to continue connecting (yes/no)?
```

Type yes, then hit ENTER. (For the security-conscious: first check that the fingerprint is listed on GitHub's SSH Keys page.)

If your configuration is correct and you added the SSH key to the second GitHub account by pasting the contents of ~/.ssh/bob_rsa.pub in the "SSH and GPG Keys" page, you should see

```
PTY allocation request failed on channel 0
Hi USER! You've successfully authenticated, but GitHub does not provide shell access.
Connection to github.com closed.
```

USER should be the name of the second account. If you've also set up SSH access for your normal account, you should be able to run `ssh git@github.com` and get the same message, but where USER is the name of your normal account.

3. Git Author Metadata

Git stores the author's name and email address with each commit. GitHub uses this information to display user avatars in the commit history, for example. It is trivial to spoof this information (1, 2). The author's name and email address are usually stored in the global configuration file (~/.gitconfig).
You can override them on a per-repository basis by running the following in the repository's directory:

```
git config --local user.name "NAME"
git config --local user.email "EMAIL"
```

Replace NAME and EMAIL with the full name and email address of the second user. The --local flag modifies the per-repository configuration file (.git/config in the repository's root directory), as opposed to the --global flag, which modifies ~/.gitconfig. The default is --local, so it could actually be omitted.

Now you have a clone where you are effectively the second user. Use another (normal) clone to work as the first user.

*Fine print: GitHub's Terms of Service only allow one free account per person.
"HOWEVER, it seems to be accepted as law that you don't perform computationally heavy tasks with Node, as it's a single-threaded architecture."

I would reword this: don't perform computationally heavy tasks with Node unless you need to.

Sometimes you need to crunch through a bunch of data, and there are times when it's faster or better to do that in-process than it is to pass it around.

A practical example: I have a Node.js server that reads in raw log data from a bunch of servers. No standard logging utilities could be used, as I have some custom processing being done, as well as custom authentication schemes for getting the log data. The whole thing is HTTP requests, and then parsing and re-writing the data. As you can imagine, this uses a ton of CPU.

Here's the thing though... is that CPU wasted? Am I doing anything in JS that I could do faster had I written it in another language? Often the CPU is busy for a real reason, and the benefit of switching to something more native might be marginal. And then you have to factor in the overhead of switching. Remember that with Node.js you can compile native extensions, so it's possible to have the best of both worlds in a well-established framework.

For me, the human trade-offs came in: I'm a far more efficient Node.js developer than anything that runs natively. Even if my Node.js app proved to be 5x slower than something native (which I'd imagine would be on the extreme end), I could just buy 5 more servers to run it, at much less cost than it would take for me to develop and maintain the native solution.

Use what you need. If you need to burn a lot of CPU in Node.js, just make sure you're doing it as efficiently as you can. If you find that you could optimize something with native code, consider making an extension, and be sure to measure the performance differences afterwards. If you feel the desire to throw out the whole stack... reconsider your approach, as there might be something you're not considering.
It seems you need the OWIN OAuth 2.0 Authorization Server. This is the Microsoft extension that adds the required functionality. It creates an OAuth endpoint (e.g. /token) that you can use to get a token. You don't get a controller directly; instead, there is a special OWIN class connected to the endpoint that you extend to add whatever you need. You can find more details here and here. It's a bit of a long read, but it works, and I have used it in a few projects.

Here is a simple example of how you can do it (GrantResourceOwnerCredentials is the most important method for you):

```csharp
public class SimpleAuthorizationServerProvider : OAuthAuthorizationServerProvider
{
    public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
    {
        // Add CORS, e.g.
        context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { "*" });

        using (AuthRepository _repo = new AuthRepository())
        {
            IdentityUser user = await _repo.FindUser(context.UserName, context.Password);
            if (user == null)
            {
                context.SetError("invalid_grant", "The user name or password is incorrect.");
                return;
            }
        }

        var identity = new ClaimsIdentity(context.Options.AuthenticationType);
        identity.AddClaim(new Claim("sub", context.UserName));
        identity.AddClaim(new Claim("role", "user"));

        context.Validated(identity);
    }
}
```
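For completeness, here is a sketch of how the provider is typically wired into the OWIN startup (types from the Microsoft.Owin.Security.OAuth package; the paths and lifetimes are just example values):

```csharp
public void ConfigureAuth(IAppBuilder app)
{
    var serverOptions = new OAuthAuthorizationServerOptions
    {
        AllowInsecureHttp = true, // for development only
        TokenEndpointPath = new PathString("/token"),
        AccessTokenExpireTimeSpan = TimeSpan.FromDays(1),
        Provider = new SimpleAuthorizationServerProvider()
    };

    // Creates the /token endpoint and validates bearer tokens on incoming requests.
    app.UseOAuthAuthorizationServer(serverOptions);
    app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions());
}
```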
I have not tried Angular 2, though you should be able to set the img src to a Blob URL of the first File object in input.files (a FileList).

In Chromium/Chrome you can get webkitRelativePath from the File object, though the property is "non-standard" and could possibly be set to an empty string; that is, it should not be relied on for the relative path to the selected file in the user's filesystem.

"File.webkitRelativePath: This feature is non-standard and is not on a standards track. Do not use it on production sites facing the Web: it will not work for every user. There may also be large incompatibilities between implementations and the behavior may change in the future."

"The webkitRelativePath attribute of the File interface must return the relative path of the file, or the empty string if not specified." (4.10.5.1.18. File Upload state (type=file))

"EXAMPLE 16: For historical reasons, the value IDL attribute prefixes the file name with the string "C:\fakepath\". Some legacy user agents actually included the full path (which was a security vulnerability). As a result of this, obtaining the file name from the value IDL attribute in a backwards-compatible way is non-trivial."

See also: How FileReader.readAsText in HTML5 File API works?

```html
<!DOCTYPE html>
<html>
<head>
</head>
<body>
  <img src="" width="100px" alt="preview">
  <input type="file" multiple onchange="onUpload(this)" id="input" accept="image/*" />
  <br><label for="input"></label>
  <script>
    let url;

    function onUpload(element) {
      console.log(element)
      let file = element.files[0];
      if (url) {
        URL.revokeObjectURL(url);
      }
      url = URL.createObjectURL(file);
      if ("webkitRelativePath" in file && file.webkitRelativePath !== "") {
        element.labels[0].innerHTML = file.webkitRelativePath;
      } else {
        element.labels[0].innerHTML = element.value;
      }
      element.previousElementSibling.src = url;
      element.value = null;
    }
  </script>
</body>
</html>
```
You can send mails using the Laravel Mail class like this. Your controller should look like this:

```php
use Mail;

class EmailsController
{
    public function send(Request $request)
    {
        $email = $request->get('email');

        Mail::send('emails.send', ['email' => $email], function ($message) use ($email) {
            $message->from('[email protected]', 'Your Name');
            $message->to($email);
        });

        return response()->json(['message' => 'Invitation Email Sent!']);
    }
}
```

Your view should be in the directory resources/views/emails/send.blade.php (it needs the .blade.php extension for the `{{ }}` syntax to be parsed):

```html
<html>
<head></head>
<body style="background: black; color: white">
    <h1>Email Invitation</h1>
    <p>Hello - {{$email}}</p>
    <p>....</p>
</body>
</html>
```

Note: remember to configure your .env file for mails:

```
MAIL_DRIVER=smtp
MAIL_HOST=smtp.gmail.com
MAIL_PORT=587
MAIL_USERNAME=[email protected]
MAIL_PASSWORD=apppassword
MAIL_ENCRYPTION=tls
```

Don't forget to run `php artisan config:cache` after you make changes in your .env file. Hope this helps!

Also remember to configure your mail.php file inside config/mail.php like this:

```php
/*
|--------------------------------------------------------------------------
| Global "From" Address
|--------------------------------------------------------------------------
|
| You may wish for all e-mails sent by your application to be sent from
| the same address. Here, you may specify a name and address that is
| used globally for all e-mails that are sent by your application.
|
*/

'from' => ['address' => '[email protected]', 'name' => 'Your name'],
```

Configure the file above; if you want more help, see this.
As per your new code: you have an extra bracket, again in `, 256)));`. Here is the corrected statement:

```sql
INSERT INTO `user` (`username`, `salt`, `passwordhash`)
VALUES ('username', 'a1b2c3d4e5f6g', SHA2(CONCAT('password', 'a1b2c3d4e5f6g'), 256));
```

This, along with all the comments I left up there.

Edit: the following, found while going through the MySQL manual on SHA2(), could be a related issue. As per https://dev.mysql.com/doc/refman/5.5/en/encryption-functions.html#function_sha2:

"This function works only if MySQL has been configured with SSL support. See Section 6.4, "Using Secure Connections". SHA2() can be considered cryptographically more secure than MD5() or SHA1(). SHA2() was added in MySQL 5.5.5."

So, make sure that the MySQL version you are using matches that or is higher. If it is lower than 5.5.5, then SHA2() isn't available for you to use. That, or it could be a sysadmin/security issue on the server. Contact the sysadmin at your school, if that is where you are running this, as it seems from what you said in the comments ("I'm following a school tutorial that said to use SHA2"). Either way, the MySQL version is important here.

To check the version of MySQL installed, use the following syntax in phpMyAdmin:

```sql
SHOW VARIABLES LIKE "%version%";
```

instead of what you used, which was `SELECT VERSION();`.
The exception mentions:

"Encryption raised an exception. A possible cause is you are using strong encryption algorithms and you have not installed the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files in this Java Virtual Machine"

This answer only tries to fix that issue. I've written another answer to help with the following issue, since these are totally different.

If you live in a country that does allow it, you can go and download the policy files from Oracle's website. Then, to install these unlimited-strength packages, go into your $JAVA_HOME/jre/lib/security/ folder (assuming you have a JDK). There, back up your local_policy.jar and US_export_policy.jar. Now unzip the local_policy.jar and US_export_policy.jar files from the zip file you downloaded into that folder, and restart your application. Your application now has access to unlimited-strength JCE capabilities. If anything goes wrong, revert the two files to their backup versions.

Please note that each JVM that will have to run this code must be "patched" this way.
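A quick way to verify that the policy files took effect is to check the maximum allowed key length via the standard javax.crypto API:

```java
import javax.crypto.Cipher;

public class JcePolicyCheck {
    public static void main(String[] args) throws Exception {
        // 128 under the default restricted policy; Integer.MAX_VALUE once unlimited
        System.out.println("Max AES key length: " + Cipher.getMaxAllowedKeyLength("AES"));
    }
}
```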
Open settings.xml for your Maven settings configuration and add a server with your credentials there. Usually this file is located in the .m2 folder, so add something like:

<servers>
    <server>
        <id>docker-hub</id>
        <username>username</username>
        <password>password</password>
    </server>
</servers>

These settings shouldn't be (and AFAIK can't be) in pom.xml because of security issues. If you are interested in a more secure option you can encrypt your password like in the example here.

Your pom.xml is too messy. Try to start with the simplest pom.xml configuration. Check the springio example and change the springio property to your Docker Hub repo.

<properties>
    <docker.image.prefix>springio</docker.image.prefix>
</properties>
<build>
    <plugins>
        <plugin>
            <groupId>com.spotify</groupId>
            <artifactId>docker-maven-plugin</artifactId>
            <version>0.4.11</version>
            <configuration>
                <imageName>${docker.image.prefix}/${project.artifactId}</imageName>
                <dockerDirectory>src/main/docker</dockerDirectory>
                <serverId>docker-hub</serverId>
                <!-- <registryUrl></registryUrl> is optional and defaults to
                     https://index.docker.io/v1/ in the Spotify docker-client dependency. -->
                <resources>
                    <resource>
                        <targetPath>/</targetPath>
                        <directory>${project.build.directory}</directory>
                        <include>${project.build.finalName}.jar</include>
                    </resource>
                </resources>
            </configuration>
        </plugin>
    </plugins>
</build>
Your first problem is that you didn't ever actually use the func parameter to function_code... the code bytes from your indicated function never make it into the buffer.

But fixing that won't get you very far, because there's a big problem with your whole approach. Neither the C nor the C++ standard guarantees that functions are laid out in memory in the same sequence as they appear in source code... in fact, on any toolchain with any clue about optimization, they won't be.

I suggest you use the gcc-specific attributes for placing functions in a particular code section to separate the functions of interest, then store and strip that section from the executable. Perform your encryption/decryption on the entire section as a unit instead of trying to find out where individual functions begin and end (see the sketch at the end of this answer).

Do note that the unencrypted code will be available in memory, where skilled reversers will surely find it. Because you don't know what you're doing, you've probably already spent more time troubleshooting your function-length code than a reverse engineer will spend getting to the decrypted bytes. Runtime decryption of code is very common for packers, and your dynamic allocation with execute access is going to stand out like a bright beacon in /proc/<pid>/maps.
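For illustration, a minimal hedged sketch of the section-placement idea (GCC-specific; the section name .encrypted is an arbitrary choice, and the store/strip/decrypt steps are not shown):

#include <stdio.h>

/* Place these functions in a dedicated section so they occupy one
 * contiguous region that can be encrypted/stripped as a unit. */
__attribute__((section(".encrypted")))
int secret_add(int a, int b) {
    return a + b;
}

__attribute__((section(".encrypted")))
int secret_mul(int a, int b) {
    return a * b;
}

int main(void) {
    /* At runtime you would decrypt the .encrypted section in memory
     * before calling into it; here we just call it directly. */
    printf("%d\n", secret_add(2, secret_mul(3, 4)));
    return 0;
}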
Can I use MemoryCache in an ITicketStore to store an AuthenticationTicket?

Absolutely, here is the implementation that I have been using for nearly a year.

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationScheme = "App.Cookie",
    AutomaticAuthenticate = true,
    AutomaticChallenge = true,
    LoginPath = new PathString("/Authentication/SignIn"),
    LogoutPath = new PathString("/Authentication/SignOut"),
    ReturnUrlParameter = "/Authentication/SignIn",
    SessionStore = new MemoryCacheStore(cache)
});

The implementation of the MemoryCacheStore looks like this, and it follows the example that you shared:

public class MemoryCacheStore : ITicketStore
{
    private const string KeyPrefix = "AuthSessionStore-";
    private readonly IMemoryCache _cache;

    public MemoryCacheStore(IMemoryCache cache)
    {
        _cache = cache;
    }

    public async Task<string> StoreAsync(AuthenticationTicket ticket)
    {
        var key = KeyPrefix + Guid.NewGuid();
        await RenewAsync(key, ticket);
        return key;
    }

    public Task RenewAsync(string key, AuthenticationTicket ticket)
    {
        // https://github.com/aspnet/Caching/issues/221
        // Set to "NeverRemove" to prevent undesired evictions from gen2 GC
        var options = new MemoryCacheEntryOptions { Priority = CacheItemPriority.NeverRemove };
        var expiresUtc = ticket.Properties.ExpiresUtc;
        if (expiresUtc.HasValue)
        {
            options.SetAbsoluteExpiration(expiresUtc.Value);
        }
        options.SetSlidingExpiration(TimeSpan.FromMinutes(60));

        _cache.Set(key, ticket, options);
        return Task.FromResult(0);
    }

    public Task<AuthenticationTicket> RetrieveAsync(string key)
    {
        AuthenticationTicket ticket;
        _cache.TryGetValue(key, out ticket);
        return Task.FromResult(ticket);
    }

    public Task RemoveAsync(string key)
    {
        _cache.Remove(key);
        return Task.FromResult(0);
    }
}
You can write your own HTTP message converter. Since you are using Spring Boot it would be quite easy: just extend your custom converter from AbstractHttpMessageConverter and mark the class with the @Component annotation.

From the Spring docs:

You can contribute additional converters by simply adding beans of that type in a Spring Boot context. If a bean you add is of a type that would have been included by default anyway (like MappingJackson2HttpMessageConverter for JSON conversions) then it will replace the default value.

And here is a simple example:

@Component
public class Converter extends AbstractHttpMessageConverter<Object> {

    public static final Charset DEFAULT_CHARSET = Charset.forName("UTF-8");

    @Inject
    private ObjectMapper objectMapper;

    public Converter() {
        super(MediaType.APPLICATION_JSON_UTF8,
              new MediaType("application", "*+json", DEFAULT_CHARSET));
    }

    @Override
    protected boolean supports(Class<?> clazz) {
        return true;
    }

    @Override
    protected Object readInternal(Class<? extends Object> clazz, HttpInputMessage inputMessage)
            throws IOException, HttpMessageNotReadableException {
        return objectMapper.readValue(decrypt(inputMessage.getBody()), clazz);
    }

    @Override
    protected void writeInternal(Object o, HttpOutputMessage outputMessage)
            throws IOException, HttpMessageNotWritableException {
        outputMessage.getBody().write(encrypt(objectMapper.writeValueAsBytes(o)));
    }

    private InputStream decrypt(InputStream inputStream) {
        // do your decryption here
        return inputStream;
    }

    private byte[] encrypt(byte[] bytesToEncrypt) {
        // do your encryption here
        return bytesToEncrypt;
    }
}
You were on the right track with the code posted in your initial question. The IdentityServerAuthenticationOptions object has properties to override the default HttpMessageHandlers it uses for back channel communication. Once you combine this with the CreateHandler() method on your TestServer object you get:

//build identity server here
var idBuilder = new WebHostBuilder();
idBuilder.UseStartup<Startup>();
//...

TestServer identityTestServer = new TestServer(idBuilder);

var identityServerClient = identityTestServer.CreateClient();

var token = //use identityServerClient to get Token from IdentityServer

//build Api TestServer
var options = new IdentityServerAuthenticationOptions()
{
    Authority = "http://localhost:5001",

    // IMPORTANT PART HERE
    JwtBackChannelHandler = identityTestServer.CreateHandler(),
    IntrospectionDiscoveryHandler = identityTestServer.CreateHandler(),
    IntrospectionBackChannelHandler = identityTestServer.CreateHandler()
};

var apiBuilder = new WebHostBuilder();

apiBuilder.ConfigureServices(c => c.AddSingleton(options));

//build api server here

var apiClient = new TestServer(apiBuilder).CreateClient();
apiClient.SetBearerToken(token);

//proceed with auth testing

This allows the AccessTokenValidation middleware in your Api project to communicate directly with your in-memory IdentityServer without the need to jump through hoops.

As a side note, for an Api project, I find it useful to add IdentityServerAuthenticationOptions to the services collection in Startup.cs using TryAddSingleton instead of creating it inline:

public void ConfigureServices(IServiceCollection services)
{
    services.TryAddSingleton(new IdentityServerAuthenticationOptions
    {
        Authority = Configuration.IdentityServerAuthority(),
        ScopeName = "api1",
        ScopeSecret = "secret",
        //...,
    });
}

public void Configure(IApplicationBuilder app)
{
    var options = app.ApplicationServices.GetService<IdentityServerAuthenticationOptions>();
    app.UseIdentityServerAuthentication(options);
    //...
}

This allows you to register the IdentityServerAuthenticationOptions object in your tests without having to alter the code in the Api project.
Define an "effective one". While the three JavaScript snippets you presented do work, have you looked at ways to stop them? For example, in the iframe (which the attacker would control), add the sandbox attribute with an empty value:

<iframe src="Inner.html" sandbox="">

iframe documentation

Here is a sample test I did:

Outer.html

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <title>Outer Html</title>
</head>
<body>
    <h1>Outer Html</h1>
    <p>
        Outer Html
    </p>
    <iframe src="Inner.html" width="400" height="300" sandbox="">
        <p>Your browser does not support iframes.</p>
    </iframe>
</body>
</html>

Inner.html

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <title>Inner Html</title>
    <script>
        if (top != self) {
            top.location.replace(self.location.href);
        }
    </script>
</head>
<body>
    <h1>Inner Html</h1>
    <p>
        Inner Html
    </p>
</body>
</html>

X-Frame-Options

If you are using X-Frame-Options and the JavaScript code to make sure you are not in a frame, why not set the value to DENY instead of SAMEORIGIN? With SAMEORIGIN the attack surface is smaller: an attack will only be successful if the iframe is coming from your own site (possible with another vulnerability). If you are not going to be using iframes on your site, you might as well go with the safer option of X-Frame-Options: DENY. I think this is what HPE Fortify WebInspect is expecting.

Content Security Policy (CSP) frame-ancestors directive

Another option is to use the frame-ancestors directive with the 'none' value in the 'Content-Security-Policy' HTTP header. This will prevent (in supported browsers) any domain from framing the content. This setting is recommended unless a specific need has been identified for framing.

Great resource

You can find more information about prevention of clickjacking over at OWASP's Clickjacking Defense Cheat Sheet.
There is a quite simple answer to how to encrypt files. This script uses XOR encryption to encrypt files. Encrypt the file a second time to decrypt it.

#include <iostream>
#include <fstream>
#include <iterator>
#include <string>
using namespace std;

void encrypt(string &key, string &data)
{
    float percent;
    for (int i = 0; i < data.size(); i++) {
        percent = (100.0 * i) / data.size();   // progress of encryption
        data[i] = data[i] ^ key[i % key.size()];
        if (percent < 100) {
            cout << percent << "%\r";          // outputs percent, \r makes
        } else {                               // cout overwrite the
            cout << "100%\r";                  // last line.
        }
    }
}

int main()
{
    string data;
    string key = "This_is_the_key";

    ifstream in("File", ios::binary);          // this input stream opens the
                                               // file and ...
    data.assign(istreambuf_iterator<char>(in), // ... reads all of its bytes
                istreambuf_iterator<char>());  // (note: in >> data would stop
    in.close();                                // at the first whitespace).

    encrypt(key, data);

    ofstream out("File", ios::binary);         // opens the output stream and ...
    out << data;                               // ... writes encrypted data to file.
    out.close();
    return 0;
}

This line of code is where the encryption happens:

data[i] = data[i] ^ key[i % key.size()];

It encrypts each byte individually. Each byte is encrypted with a char that changes during the encryption because of this:

key[i % key.size()]

But there are a lot of encryption methods. For example, you could add 1 to each byte (encryption) and subtract 1 from each byte (decryption):

// Encryption
for (int i = 0; i < data.size(); i++) {
    data[i] = data[i] + 1;
}

// Decryption
for (int i = 0; i < data.size(); i++) {
    data[i] = data[i] - 1;
}

I think it's not very useful to show the progress because it is too fast. If you really want to make a GUI, I would recommend Visual Studio. Hope that was helpful.
Encrypt the passwords with the user's password, which you do not store. That way the passwords are never stored in recoverable form, only transient in RAM while being used.

On creation of a user, use password_hash to save the user's hashed password.

When the user needs to save a 3rd-party password for later use, authenticate the user with password_verify and then use the user's password to encrypt the 3rd-party password and save that. Note that the server does not store the key to the encrypted 3rd-party password.

When the user wants to log in to the 3rd-party site, he enters his password; it is verified against his hashed password with password_verify, the 3rd-party password is decrypted with his password and sent to the 3rd party.

Notes:

a. The user's password is not used as the encryption key directly; the encryption function uses a key derivation function such as PBKDF2 to generate the actual encryption key.

b. Both password_hash and the encryption key derivation function must have a high work factor, commonly about 100ms.

Another avenue to explore is the encryption schemes password managers use, or find an open source password manager implementation you can use; be sure it is well vetted.
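A minimal hedged sketch of this flow in PHP (all function names are from the standard library; the iteration count, salt handling, and error checking are simplified for illustration):

<?php
// Registration: store only the *hash* of the user's password.
$userHash = password_hash($userPassword, PASSWORD_DEFAULT);

// Saving a 3rd-party password: derive a key from the user's password;
// the key itself is never stored.
function encryptThirdParty($thirdPartyPassword, $userPassword) {
    $salt = random_bytes(16);
    $iv   = random_bytes(16);
    $key  = hash_pbkdf2('sha256', $userPassword, $salt, 100000, 32, true);
    $ct   = openssl_encrypt($thirdPartyPassword, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
    // Store salt + iv + ciphertext; none of these reveal the key.
    return base64_encode($salt . $iv . $ct);
}

// Login to the 3rd party: verify first, then decrypt with the same derivation.
function decryptThirdParty($blob, $userPassword, $userHash) {
    if (!password_verify($userPassword, $userHash)) {
        return null; // wrong password, nothing to decrypt with
    }
    $raw  = base64_decode($blob);
    $salt = substr($raw, 0, 16);
    $iv   = substr($raw, 16, 16);
    $ct   = substr($raw, 32);
    $key  = hash_pbkdf2('sha256', $userPassword, $salt, 100000, 32, true);
    return openssl_decrypt($ct, 'aes-256-cbc', $key, OPENSSL_RAW_DATA, $iv);
}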
If we test this situation empirically, we will see that everything longer than 117 bytes will fail:

$msg = 'abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklm';

The line above represents 117 characters and 117 bytes in total. This works when encrypting with the public key you provided. If I add another character, n, encryption fails:

$msg = 'abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmn';

The same goes for other Unicode characters. Let's say I try to encrypt this, which is 85 characters long but exactly 117 bytes in length:

$msg = ' i β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui u';

This gets encrypted perfectly. But if I add another byte, it fails (86 characters, 118 bytes):

$msg = ' i β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui β™₯ ui uZ';

As Snyder, Chris; Myer, Thomas; Southwell, Michael (ISBN 978-1-4302-3318-3) put it:

.. the openssl_public_encrypt() function will fail by design if you pass it more than 117 characters to encrypt.

Further, the book says:

Because RSA is expensive, and was never intended for encrypting quantities of data, if you are encrypting something that is routinely longer than 56 characters, you should be planning to encrypt your data using a fast and efficient symmetric algorithm like AES with a randomly generated key.
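As an illustration of that hybrid approach, here is a minimal hedged PHP sketch (standard OpenSSL extension functions; $publicKey is assumed to be your PEM public key, and error handling is omitted):

<?php
$msg = '... arbitrarily long plaintext ...';

// 1. Encrypt the bulk data with a random AES key.
$aesKey = random_bytes(32);
$iv     = random_bytes(16);
$cipherText = openssl_encrypt($msg, 'aes-256-cbc', $aesKey, OPENSSL_RAW_DATA, $iv);

// 2. Encrypt only the small AES key with RSA (well under the 117-byte limit).
openssl_public_encrypt($aesKey, $encryptedKey, $publicKey);

// Ship $encryptedKey, $iv and $cipherText together; the private-key holder
// reverses the steps with openssl_private_decrypt() and openssl_decrypt().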
Use the OpenIdConnectDefaults.AuthenticationScheme constant when you add the authorization policy and when you add the authentication middleware.

Here you are using OpenIdConnectDefaults. Good. Keep that line.

services.AddAuthorization(configuration =>
{
    ...
    configuration.AddPolicy("OpenIdConnect", new AuthorizationPolicyBuilder()
        .AddAuthenticationSchemes(OpenIdConnectDefaults.AuthenticationScheme) // keep
        .RequireAuthenticatedUser().Build());
});

Here you are using CookieAuthenticationDefaults. Delete that line.

app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
{
    ...
    SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme // delete
});

Why? When your OpenIdConnect authorization policy runs, it will look for an authentication scheme named OpenIdConnectDefaults.AuthenticationScheme. It will not find one, because the registered OpenIdConnect middleware is named CookieAuthenticationDefaults.AuthenticationScheme. If you delete that errant line, then the code will automatically use the appropriate default.

Edit: Commentary on the sample

A second reasonable solution

The linked sample application from the comments calls services.AddAuthentication and sets SignInScheme to "Cookies". That changes the default sign in scheme for all of the authentication middleware. Result: the call to app.UseOpenIdConnectAuthentication is now equivalent to this:

app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
{
    SignInScheme = CookieAuthenticationDefaults.AuthenticationScheme
});

That is exactly what Camilo had in the first place. So why did my answer work? My answer worked because it does not matter what SignInScheme name we choose; what matters is that those names are consistent. If we set our OpenIdConnect authentication sign in scheme to "Cookies", then when adding an authorization policy, we need to ask for that scheme by name like this:

services.AddAuthorization(configuration =>
{
    ...
    configuration.AddPolicy("OpenIdConnect", new AuthorizationPolicyBuilder()
        .AddAuthenticationSchemes(CookieAuthenticationDefaults.AuthenticationScheme) // <----
        .RequireAuthenticatedUser().Build());
});

A third reasonable solution

To emphasize the importance of consistency, here is a third reasonable solution that uses an arbitrary sign in scheme name.

services.AddAuthorization(configuration =>
{
    configuration.AddPolicy("OpenIdConnect", new AuthorizationPolicyBuilder()
        .AddAuthenticationSchemes("Foobar")
        .RequireAuthenticatedUser().Build());
});

app.UseOpenIdConnectAuthentication(new OpenIdConnectOptions
{
    SignInScheme = "Foobar"
});
Here is another way to do this (Swift 3, Alamofire 4.x) using a DispatchGroup:

import Alamofire

struct SequentialRequest {

    static func fetchData() {

        let authRequestGroup = DispatchGroup()
        let requestGroup = DispatchGroup()
        var results = [String: String]()

        //First request - this would be the authentication request
        authRequestGroup.enter()
        Alamofire.request("http://httpbin.org/get").responseData { response in
            print("DEBUG: FIRST Request")
            results["FIRST"] = response.result.description

            if response.result.isSuccess {
                //Authentication successful, you may use your own tests to confirm that authentication was successful

                authRequestGroup.enter() //request for data behind authentication
                Alamofire.request("http://httpbin.org/get").responseData { response in
                    print("DEBUG: SECOND Request")
                    results["SECOND"] = response.result.description
                    authRequestGroup.leave()
                }

                authRequestGroup.enter() //request for data behind authentication
                Alamofire.request("http://httpbin.org/get").responseData { response in
                    print("DEBUG: THIRD Request")
                    results["THIRD"] = response.result.description
                    authRequestGroup.leave()
                }
            }

            authRequestGroup.leave()
        }

        //This only gets executed once all the requests in the authRequestGroup are done (i.e. FIRST, SECOND AND THIRD requests)
        authRequestGroup.notify(queue: DispatchQueue.main, execute: {

            // Here you can perform additional requests that depend on data fetched from the FIRST, SECOND or THIRD requests
            requestGroup.enter()
            Alamofire.request("http://httpbin.org/get").responseData { response in
                print("DEBUG: FOURTH Request")
                results["FOURTH"] = response.result.description
                requestGroup.leave()
            }

            //Note: Any code placed here will be executed before the FOURTH request completes! To execute code after the FOURTH request, we need the requestGroup.notify like below
            print("This gets executed before the FOURTH request completes")

            //This only gets executed once all the requests in the requestGroup are done (i.e. FOURTH request)
            requestGroup.notify(queue: DispatchQueue.main, execute: {

                //Here, you can update the UI, HUD and turn off the network activity indicator
                for (request, result) in results {
                    print("\(request): \(result)")
                }

                print("DEBUG: all Done")
            })
        })
    }
}
How to identify users

You're not explicit about which database technology you use, but in general you should be able to use regular strings as identifiers/keys. You do mention that you're using a SQL variant, so that may be the source of the issue; you should probably use a text-based data type with a generous fixed length. The user_id is the result of concatenating the Auth0 identity provider identifier with the user identifier within that provider, so we could argue that reaching a definitive max length is a little trickier. However, you can decide on an arbitrary value; for example, something like 640 characters ought to be enough for anyone.

You can also identify your users by email; this works if every authentication provider being used by your application requires users to provide their email and you also don't intend to support different accounts with the same email address.

A final alternative is for you to assign each user your own unique identifier that is better suited for how you intend to use it. You can achieve this by having an Auth0 rule update your user metadata with this new attribute and then request this attribute to be included in the generated token upon user authentication by means of scopes. Depending on the approach, you would either need a simple lookup table mapping one form of identifier to your internal one (see the table sketch at the end of this answer), or, in the case you update the user metadata with your internal identifier, you could skip that lookup table entirely and just use the value coming from the JWT.

How to handle first-time users

Like you mentioned, you could at each API request make sure that if this is the first request issued by a new user, then you create your notion of application profile before processing the request. The alternative would be triggering this application profile creation from within Auth0 when you detect that the user signed up for the first time, and then on the API always assume the profile exists. Both approaches are valid; I would go with the one that leaves you with a simpler implementation and still meets your requirements.

How to handle users being banned

If you do need to support the ability to immediately ban a user and not allow any further requests to the API, then you'll always have to have some kind of query at each API request to see if the user was banned or not. This increases the complexity significantly, so do consider whether you can tolerate a solution where the lifetime of a token is shorter and banned users may still call your API within that short time frame.
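For illustration, a hedged sketch of such a mapping table (table and column names are hypothetical; adjust types to your SQL dialect):

-- Maps the Auth0 identifier (e.g. "auth0|57f1a2...") to an internal key.
CREATE TABLE app_user (
    internal_id   BIGINT       NOT NULL PRIMARY KEY,
    auth0_user_id VARCHAR(640) NOT NULL UNIQUE,  -- provider|provider_user_id
    email         VARCHAR(320),
    banned        BOOLEAN      NOT NULL DEFAULT FALSE
);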
Even if it's possible, in my opinion it has some disadvantages. In general I like clients to be as simple as possible to avoid maintenance issues. Instead I'd route all client requests through a REST API on my app server (see the sketch at the end of this answer). The disadvantages are not related to Kafka, but are common problems of native clients.

Coupling

You're coupling the Android app closely to your messaging infrastructure. If you later decide that a Kafka solution is too much and Plain Old Java would be good enough, you'll first have to update the Android app and wait until enough users do an update.

Network issues + delivery guarantees

Kafka clients also require a direct connection to each of the brokers. Mobile clients can have very inconsistent/spotty network connectivity, making direct client access susceptible to dropped events and overall network connectivity issues.

Authentication

Probably you already have some kind of authentication in your app. You can also create authenticated connections to Kafka. So you'll have two authentication paths, whereas with an app server Kafka only needs to check if the requests are coming from the trusted app server, which means less implementation effort.

...
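To make the app-server idea concrete, here is a hedged sketch of a thin server-side component that produces to Kafka on behalf of mobile clients (the topic name, broker address, and the surrounding REST layer are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventGateway {
    private final Producer<String, String> producer;

    public EventGateway() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // only the app server needs broker access
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        this.producer = new KafkaProducer<>(props);
    }

    // Called from your REST layer after the app's own auth check has passed.
    public void publish(String userId, String eventJson) {
        producer.send(new ProducerRecord<>("mobile-events", userId, eventJson));
    }
}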
Instead of registering/removing services at runtime, I would create a service factory which decides on the right service at runtime.

services.AddTransient<AuthenticationService>();
services.AddTransient<NoAuthService>();
services.AddTransient<IAuthenticationServiceFactory, AuthenticationServiceFactory>();

AuthenticationServiceFactory.cs

public class AuthenticationServiceFactory : IAuthenticationServiceFactory
{
    private readonly AuthenticationService _authenticationService;
    private readonly NoAuthService _noAuthService;

    public AuthenticationServiceFactory(AuthenticationService authenticationService, NoAuthService noAuthService)
    {
        _noAuthService = noAuthService;
        _authenticationService = authenticationService;
    }

    public IAuthenticationService GetAuthenticationService()
    {
        // "settings" stands for whatever configuration object holds your Authentication flag.
        if (settings.Authentication == false)
        {
            return _noAuthService;
        }
        else
        {
            return _authenticationService;
        }
    }
}

Usage in a class:

public class SomeClass
{
    public SomeClass(IAuthenticationServiceFactory _authenticationServiceFactory)
    {
        var authenticationService = _authenticationServiceFactory.GetAuthenticationService();
    }
}
The message says "Host key verification failed." and nothing about authentication, so you are working on the wrong field. It means that the host key of bitbucket.org is not in your ~/.ssh/known_hosts and your client does not have any way to verify it. It was answered many times how to work around it, but how do you do it properly?

There is a section in the Bitbucket manuals describing what their public keys and fingerprints look like. So:

Run

ssh bitbucket.org

It will prompt you with one of the fingerprints:

The authenticity of host 'bitbucket.org (104.192.143.3)' can't be established.
RSA key fingerprint is SHA256:*****.
Are you sure you want to continue connecting (yes/no)?

You verify the fingerprint in the prompt is the same as on the Bitbucket website:

SHA256:zzXQOXSRBEiUtuE8AikJYKwbHaxvSc0ojez9YXaGp1A bitbucket.org (RSA)

You write yes and press enter to verify the connection works.

Or just copy the public key from the Bitbucket website directly into the ~/.ssh/known_hosts file:

echo "bitbucket.org,104.192.143.1 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw==" >> ~/.ssh/known_hosts

If nothing from the above helps, please run

ssh -vvv bitbucket.org

and post the output to the edited question.
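As a hedged aside, if you are unsure about the current state of your known_hosts, these standard OpenSSH commands can help (adjust the hostname as needed):

# Check whether bitbucket.org already has an entry in your known_hosts:
ssh-keygen -F bitbucket.org

# If you ever need to drop a stale/changed entry before re-verifying:
ssh-keygen -R bitbucket.org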
As for PHP encryption, there are several encryption methods available, so don't limit yourself to CI's library. For example: http://php.net/manual/en/function.mcrypt-encrypt.php

But as to your question, CI is released under the MIT license with the following terms:

/**
 * Copyright (c) 2014 - 2016, British Columbia Institute of Technology
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 */

This gives you the freedom to do anything with it except violate the condition above.

As to using the library: You may copy the library from here:
https://raw.githubusercontent.com/bcit-ci/CodeIgniter/develop/system/libraries/Encryption.php

Save it as Encryption.php, then create another PHP file:

include_once('Encryption.php');

$key = 'ASDFJGARLKERKL';

function log_message($message)
{
    error_log($message);
}

function config_item($what)
{
    // just a placeholder...
}

$cipher = new CI_Encryption();
$cipher->initialize([
    'driver' => 'openssl',
    'key' => $key,
]);

$plaintext = 'The quick brown fox';
$ciphertext = $cipher->encrypt($plaintext);
echo 'ciphertext: ' . $ciphertext . "\n";
echo 'plaintext: ' . $cipher->decrypt($ciphertext) . "\n";
I think you should try creating an AuthenticationEntryPoint implementation with multiple landing page support. It could be something like this:

import java.util.Map;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.security.core.AuthenticationException;
import org.springframework.security.web.AuthenticationEntryPoint;
import org.springframework.security.web.authentication.LoginUrlAuthenticationEntryPoint;
import org.springframework.security.web.util.matcher.RegexRequestMatcher;
import org.springframework.security.web.util.matcher.RequestMatcher;

public class MultipleLandingPageEntryPoint extends LoginUrlAuthenticationEntryPoint implements AuthenticationEntryPoint {

    private Map<String, String> landingPages;

    public MultipleLandingPageEntryPoint(String defaultLoginFormUrl, Map<String, String> landingPages) {
        super(defaultLoginFormUrl);
        this.landingPages = landingPages;
    }

    public MultipleLandingPageEntryPoint(String defaultLoginFormUrl) {
        super(defaultLoginFormUrl);
    }

    public Map<String, String> getLandingPages() {
        return landingPages;
    }

    public void setLandingPages(Map<String, String> landingPages) {
        this.landingPages = landingPages;
    }

    @Override
    protected String determineUrlToUseForThisRequest(HttpServletRequest request,
            HttpServletResponse response, AuthenticationException exception) {

        for (String key : this.landingPages.keySet()) {
            RequestMatcher rm = new RegexRequestMatcher(key, null);
            if (rm.matches(request)) {
                return this.landingPages.get(key);
            }
        }
        // If not found in the map, return the default landing page through the superclass
        return super.determineUrlToUseForThisRequest(request, response, exception);
    }
}

Then, in your security config, you must configure it:

<beans:bean id="authenticationMultiEntryPoint" class="com.xxx.yyy.MultipleLandingPageEntryPoint">
    <beans:constructor-arg value="/user/landing.htm" />
    <beans:property name="landingPages">
        <beans:map>
            <beans:entry key="/user**" value="/user/landing.htm" />
            <beans:entry key="/admin**" value="/admin/landing.htm" />
        </beans:map>
    </beans:property>
</beans:bean>

And use it in your <security:http> element:

<security:http pattern="/admin/landing.htm" security="none" />
<security:http pattern="/user/landing.htm" security="none" />

<security:http auto-config="true" use-expressions="true" entry-point-ref="authenticationMultiEntryPoint">

If you implement the AuthenticationEntryPoint by extending LoginUrlAuthenticationEntryPoint (which I think is a good idea), check the additional parameters on it.

EDIT: I've just updated the class implementation; the earlier snippet was not the latest version.
Unfortunately, you can only upload files using the SoftLayer Object Storage Java Client (it's not possible to create objects directly). Here is an example of authenticating, creating a container and uploading a file using the client:

package com.softlayer.objectstorage.main;

import java.util.HashMap;
import java.util.Map;

import com.softlayer.objectstorage.Container;
import com.softlayer.objectstorage.ObjectFile;

public class ObjectStorage {

    String baseUrl;
    String user;
    String password;

    public ObjectStorage(String baseUrl, String user, String password) {
        this.baseUrl = baseUrl;
        this.user = user;
        this.password = password;
    }

    public void createContainer(String containerName) {
        try {
            Container containerCreate = new Container(containerName, baseUrl, user, password, true);
            containerCreate.create();
        } catch (Exception e) {
            System.out.println(e);
        }
    }

    public void UploadFile(String containerName, String fileName, String path) {
        try {
            ObjectFile oFile = new ObjectFile(fileName, containerName, baseUrl, user, password, true);
            Map<String, String> tags = new HashMap<String, String>();
            tags.put("testtag", "Test Value");
            String newOb = oFile.uploadFile(path, tags);
        } catch (Exception e) {
            System.out.println(e);
        }
    }

    public static void main(String[] args) {
        /**
         * Define Object Storage's parameters
         */
        String baseUrl = "https://dal05.objectstorage.softlayer.net/auth/v1.0/";
        String user = "set me";
        String password = "set me";

        // Define the container name to create
        String containerName = "containerTest";

        // Define the file name to create in the object storage
        String fileName = "newTest.txt";

        // Define the location path of the file that you wish to upload
        String pathFile = "C:\\Users\\Ruber Cuellar\\Documents\\test.txt";

        // Create Object Storage connection
        ObjectStorage objectStorage = new ObjectStorage(baseUrl, user, password);

        // Create Container
        objectStorage.createContainer(containerName);

        // Upload file
        objectStorage.UploadFile(containerName, fileName, pathFile);
    }
}

I hope it helps; let me know if you have any doubts or comments.
Bernd, you have a rather large set of technology moving parts here :-). Let me pick them into pieces:

Domino: you need something outside of Bluemix for storing the NSF, like a SoftLayer Domino server. That will be key to the solution.
Mobile app: Cordova is right, but look one step further and have a look at Ionic. It uses Cordova under the hood. You can add it to your app as is, or use IBM MobileFirst Foundation.
Push notifications: there's a service for that in Bluemix.
Authentication: there's a service for that.

What I would do:

On the Domino server holding the NSFs, deploy an OSGi plugin you write extending Domino Access Services that reads/writes the data you are interested in as JSON. Use the OpenNTF Domino API (ODA) to make your life easier.

Configure the server to only talk to Bluemix. I would use VPN technology for that - Bluemix has a service for that.

Now the fun part: configure Domino to accept the WAS headers for user identity. Securing Domino in the step before is ESSENTIAL, since hitting it directly would now allow spoofing identity. This is why ONLY your Bluemix VPN shall hit it.

Now build your app layer in Bluemix using Liberty or Node.js (I would use Node.js since Passport, a Node module, has the most authentication options) that handles auth using the Bluemix services and sets the header when talking to Domino.

Make sure you use a web worker in your mobile app to take the network out of the user experience.

That's roughly it. Hope it helps.
'Instant' derives from the term 'real-time'; client-side code can be used to send requests to the server. Instant messaging applications are complex to write from scratch - there are plenty of things to take into consideration:

Member registration and login (session authentication)
Messaging authentication (role-based access control)
Protecting against XSS attacks and similar

jQuery is a free, open source framework for JavaScript. You can use a CDN or download it. An example of sending a message would look something like this:

$(document).ready(function() {
    var someSharedUniqueCode = 'Both the recipient and sender uses this key to communicate';
    var message = 'This is an example message';

    $.post('/messages/send/' + message, {
        communicationKey: someSharedUniqueCode
    }).done(function(chat) {
        if (chat.State) {
            $('#' + someSharedUniqueCode).append(chat.Message);
        } else {
            $('#' + someSharedUniqueCode).append('Some error message');
        }
    });
});

Of course, you would need to set up rewrite rules inside your .htaccess so you can write a controller for the request (MVC), something like this:

Router::Map('POST', '/messages/send/[*:m]', function($message) {
    $authKey = $_POST['communicationKey'];

    if (!$authKey && !$message) {
        return; // your code on error
    }

    // TODO: Check user is logged in
    // TODO: Check user RBAC is able to communicate in this chat

    header('Content-Type: application/json');

    $message = new SomeMessageController(); // create a controller and modules that access the database

    echo json_encode(array(
        'State' => $message->setAuthentication($authkey) // example method
                           ->parseMessage($message)      // example method
                           ->save()                      // example method
    ), true);
}, 'Message Sender');

Remember, all of the above information should be stored in the database, and a user should only be able to send messages when they are logged in, so you'll need to add a controller to assure they are logged in. If you're unclear on MVC methodologies, here is an open source router which contains a good set of documentation.
It is definitely not secure to hardcode them and just place them in an app. Actually it's not that straightforward. I assume you created the client from artisan or from the pre-built Vue components. In either case there is more that you have to do in order to safely consume the OAuth2 API without exposing any potential security vulnerabilities in your app.

Assuming your mobile users would register from the mobile, you would need to create the user and OAuth2 client from the mobile API that you will expose for your clients (mobile apps) to consume. For this you have to do the following:

After installing Laravel Passport, perform the following artisan command:

php artisan migrate

This will create the necessary tables to store OAuth clients, their tokens and other related important information at the DB level.

After this you would need to change the client_id data type to VARCHAR(255) so as to store a username as client_id instead of storing numeric client_ids.

Now go to your models and create a model for the oauth_clients table so that you can create clients programmatically from the code while creating users.

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class oAuthClient extends Model
{
    protected $table = 'oauth_clients';
}

This will create a model class for you through which you can store OAuth clients in the DB while registering them in your app.

Route::post('/register-user', function () {
    $email = \Illuminate\Support\Facades\Input::get('email');
    $password = \Illuminate\Support\Facades\Input::get('password');

    $user = new \App\User(array(
        'name' => \Illuminate\Support\Facades\Input::get('name'),
        'email' => \Illuminate\Support\Facades\Input::get('email'),
        'password' => bcrypt(\Illuminate\Support\Facades\Input::get('password')),
    ));
    $user->save();

    $oauth_client = new \App\oAuthClient();
    $oauth_client->user_id = $user->id;
    $oauth_client->id = $email;
    $oauth_client->name = $user->name;
    $oauth_client->secret = base64_encode(hash_hmac('sha256', $password, 'secret', true));
    $oauth_client->password_client = 1;
    $oauth_client->personal_access_client = 0;
    $oauth_client->redirect = '';
    $oauth_client->revoked = 0;
    $oauth_client->save();

    return [
        'message' => 'user successfully created.'
    ];
});

This will generate an entry in the user table and the oauth_clients table, which will be used by Laravel Passport to generate the respective access_tokens for the user. In the code snippet above, note that to generate the oauth_client secret you have to use some strong encryption formula that you feel comfortable using with your application. Also use the same technique to generate the secret key on your mobile app for the respective client/user.
Now you can use the standard POST API offered by Laravel Passport to request an access token through the password grant using "oauth/token" with the following parameters:

grant_type : 'password'
client_id : '<email with which the user is registered>'
client_secret : '<generate the client secret from the mobile app>'
username : '<email with which the user is registered>'
password : '<password entered by the user>'
scope : '<leave empty as default>'

The above will give you a response, if everything is correct, similar to:

{
  "token_type": "Bearer",
  "expires_in": 3155673600,
  "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImp0aSI6IjMwZmM0MDk1NWY5YjUwNDViOTUzNDlmZjc2M2ExNDUxOTAxZjc5YTA5YjE4OWM1MjEzOTJlZmNiMDgwOWQzMzQwM2ExZWI4ZmMyODQ1MTE3In0.eyJhdWQiOiJzaHVqYWhtQGdtYWlsLmNvbSIsImp0aSI6IjMwZmM0MDk1NWY5YjUwNDViOTUzNDlmZjc2M2ExNDUxOTAxZjc5YTA5YjE4OWM1MjEzOTJlZmNiMDgwOWQzMzQwM2ExZWI4ZmMyODQ1MTE3IiwiaWF0IjoxNDc4MTQ1NjMyLCJuYmYiOjE0NzgxNDU2MzIsImV4cCI6NDYzMzgxOTIzMiwic3ViIjoiMSIsInNjb3BlcyI6W119.dj3g9b2AdPCK-im5uab-01SP71S7AR96R0FQTKKoaZV7M5ID1pSXDlmZw96o5Bd_Xsy0nUqFsPNRQsLvYaOuHZsP8v9mOVirBXLIBvPcBc6lDRdNXvRidNqeh4JHhJu9a5VzNlJPm3joBYSco4wYzNHs2BPSxXuuD3o63nKRHhuUHB-HwjVxj2GDwzEYXdZmf2ZXOGRJ99DlWGDvWx8xQgMQtd1E9Xk_Rs6Iu8tycjBpKBaC24AKxMI6T8DpelnFmUbMcz-pRsgCWCF_hxv6FpXav3jr1CLhhT58_udBvXjQAXEbtHeB7W_oaMcaqezHdAeOWDcnqREZHsnXHtKt0JpymcTWBkS2cg7sJzy6P9mOGgQ8B4gb8wt44_kHTeWnokk4yPFRZojkHLVZb8YL6hZxLlzgV1jCHUxXoHNe1VKlHArdlV8LAts9pqARZkyBRfwQ8oiTL-2m16FQ_qGg-9vI0Suv7d6_W126afI3LxqDBi8AyqpQzZX1FWmuJLV0QiNM0nzTyokzz7w1ilJP2PxIeUzMRlVaJyA395zq2HjbFEenCkd7bAmTGrgEkyWM6XEq1P7qIC_Ne_pLNAV6DLXUpg9bUWEHhHPXIDYKHS-c3N9fPDt8UVvGI8n0rPMieTN92NsYZ_6OqLNpcm6TrhMNZ9eg5EC0IPySrrv62jE",
  "refresh_token": "BbwRuDnVfm7tRQk7qSYByFbQKK+shYPDinYA9+q5c/ovIE1xETyWitvq6PU8AHnI5FWb06Nl2BVoBwCHCUmFaeRXQQgYY/i5vIDEQ/TJYFLVPRHDc7CKILF0kMakWKDk7wJdl5J6k5mN38th4pAAZOubiRoZ+2npLC7OSZd5Mq8LCBayzqtyy/QA5MY9ywCgb1PErzrGQhzB3mNhKj7U51ZnYT3nS5nCH7iJkCjaKvd/Hwsx2M6pXnpY45xlDVeTOjZxxaOF/e0+VT2FP2+TZMDRfrSMLBEkpbyX0M/VxunriRJPXTUvl3PW0sVOEa3J7+fbce0XWAKz7PNs3+hcdzD2Av2VHYF7/bJwcDCO77ky0G4JlHjqC0HnnGP2UWI5qR+tCSBga7+M1P3ESjcTCV6G6H+7f8SOSv9FECcJ8J5WUrU+EHrZ95bDtPc9scE4P3OEQaYchlC9GHk2ZoGo5oMJI6YACuRfbGQJNBjdjxvLIrAMrB6DNGDMbH6UZodkpZgQjGVuoCWgFEfLqegHbp34CjwL5ZFJGohV+E87KxedXE6aEseywyjmGLGZwAekjsjNwuxqD2QMb05sg9VkiUPMsvn45K9iCLS5clEKOTwkd+JuWw2IU80pA24aXN64RvOJX5VKMN6CPluJVLdjHeFL55SB7nlDjp15WhoMU1A="
}

You can use these tokens safely from your client apps (mobile apps).

Hope it helps!
The problem with Laravel 5.3 Passport is that, unlike the previous OAuth 2.0 Server for Laravel library offered by lucadegasperi, it has no API to create clients directly. So as of now, clients can only be made through the front-end.

FYI, we wanted to use Laravel Passport solely for our mobile app, so while creating and registering a user we would have only an email & password, and in some cases only a Facebook user ID for Facebook sign-in. So the following approach worked pretty well for our case; it might differ for your scenario, but it may help you in the longer term to play around with Laravel Passport.

Note: Before following the steps below, it is assumed you have enabled the password grant in your application.

So the way we solved it for our project on Laravel 5.3 is as follows:

In the oauth_clients table, convert the id field into a normal field, i.e. remove it as primary key and make the data type VARCHAR so that we can store email addresses as client_ids, as they are also unique for your system. In the case of Facebook login we store Facebook user IDs in this column, which again will be unique for each of our clients.

Also for other tables like oauth_access_tokens, oauth_auth_codes & oauth_personal_access_clients, change client_id to VARCHAR(255) so that it can store email addresses or Facebook user IDs.

Now go to your models and create a model for the oauth_clients table so that you can create clients programmatically from the code while creating users.

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class OauthClient extends Model
{
    protected $table = 'oauth_clients';
}

Then in your api.php route file add the following route:

Route::post('/register-user', function (Request $request) {
    $name = $request->input('name');
    $email = $request->input('email');
    $password = $request->input('password');

    // save new user
    $user = \App\User::create([
        'name' => $name,
        'email' => $email,
        'password' => bcrypt($password),
    ]);

    // create oauth client
    $oauth_client = \App\OauthClient::create([
        'user_id' => $user->id,
        'id' => $email,
        'name' => $name,
        'secret' => base64_encode(hash_hmac('sha256', $password, 'secret', true)),
        'password_client' => 1,
        'personal_access_client' => 0,
        'redirect' => '',
        'revoked' => 0,
    ]);

    return [
        'message' => 'user successfully created.'
    ];
});

In the above code snippet, note that to generate the oauth_client secret you have to use some strong encryption formula that you feel comfortable using with your application. Also, use the same technique to generate the secret key on your mobile app for the respective client/user.
Now you can use the standard POST API offered by Laravel Passport to request an access token through the password grant using "oauth/token" with the following parameters:

grant_type : 'password'
client_id : '<email with which the user is registered>'
client_secret : '<generate the client secret from the mobile app>'
username : '<email with which the user is registered>'
password : '<password entered by the user>'
scope : '<leave empty as default>'

The above will give you a response, if everything is correct, similar to:

{
  "token_type": "Bearer",
  "expires_in": 3155673600,
  "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImp0aSI6IjMwZmM0MDk1NWY5YjUwNDViOTUzNDlmZjc2M2ExNDUxOTAxZjc5YTA5YjE4OWM1MjEzOTJlZmNiMDgwOWQzMzQwM2ExZWI4ZmMyODQ1MTE3In0.eyJhdWQiOiJzaHVqYWhtQGdtYWlsLmNvbSIsImp0aSI6IjMwZmM0MDk1NWY5YjUwNDViOTUzNDlmZjc2M2ExNDUxOTAxZjc5YTA5YjE4OWM1MjEzOTJlZmNiMDgwOWQzMzQwM2ExZWI4ZmMyODQ1MTE3IiwiaWF0IjoxNDc4MTQ1NjMyLCJuYmYiOjE0NzgxNDU2MzIsImV4cCI6NDYzMzgxOTIzMiwic3ViIjoiMSIsInNjb3BlcyI6W119.dj3g9b2AdPCK-im5uab-01SP71S7AR96R0FQTKKoaZV7M5ID1pSXDlmZw96o5Bd_Xsy0nUqFsPNRQsLvYaOuHZsP8v9mOVirBXLIBvPcBc6lDRdNXvRidNqeh4JHhJu9a5VzNlJPm3joBYSco4wYzNHs2BPSxXuuD3o63nKRHhuUHB-HwjVxj2GDwzEYXdZmf2ZXOGRJ99DlWGDvWx8xQgMQtd1E9Xk_Rs6Iu8tycjBpKBaC24AKxMI6T8DpelnFmUbMcz-pRsgCWCF_hxv6FpXav3jr1CLhhT58_udBvXjQAXEbtHeB7W_oaMcaqezHdAeOWDcnqREZHsnXHtKt0JpymcTWBkS2cg7sJzy6P9mOGgQ8B4gb8wt44_kHTeWnokk4yPFRZojkHLVZb8YL6hZxLlzgV1jCHUxXoHNe1VKlHArdlV8LAts9pqARZkyBRfwQ8oiTL-2m16FQ_qGg-9vI0Suv7d6_W126afI3LxqDBi8AyqpQzZX1FWmuJLV0QiNM0nzTyokzz7w1ilJP2PxIeUzMRlVaJyA395zq2HjbFEenCkd7bAmTGrgEkyWM6XEq1P7qIC_Ne_pLNAV6DLXUpg9bUWEHhHPXIDYKHS-c3N9fPDt8UVvGI8n0rPMieTN92NsYZ_6OqLNpcm6TrhMNZ9eg5EC0IPySrrv62jE",
  "refresh_token": "BbwRuDnVfm7tRQk7qSYByFbQKK+shYPDinYA9+q5c/ovIE1xETyWitvq6PU8AHnI5FWb06Nl2BVoBwCHCUmFaeRXQQgYY/i5vIDEQ/TJYFLVPRHDc7CKILF0kMakWKDk7wJdl5J6k5mN38th4pAAZOubiRoZ+2npLC7OSZd5Mq8LCBayzqtyy/QA5MY9ywCgb1PErzrGQhzB3mNhKj7U51ZnYT3nS5nCH7iJkCjaKvd/Hwsx2M6pXnpY45xlDVeTOjZxxaOF/e0+VT2FP2+TZMDRfrSMLBEkpbyX0M/VxunriRJPXTUvl3PW0sVOEa3J7+fbce0XWAKz7PNs3+hcdzD2Av2VHYF7/bJwcDCO77ky0G4JlHjqC0HnnGP2UWI5qR+tCSBga7+M1P3ESjcTCV6G6H+7f8SOSv9FECcJ8J5WUrU+EHrZ95bDtPc9scE4P3OEQaYchlC9GHk2ZoGo5oMJI6YACuRfbGQJNBjdjxvLIrAMrB6DNGDMbH6UZodkpZgQjGVuoCWgFEfLqegHbp34CjwL5ZFJGohV+E87KxedXE6aEseywyjmGLGZwAekjsjNwuxqD2QMb05sg9VkiUPMsvn45K9iCLS5clEKOTwkd+JuWw2IU80pA24aXN64RvOJX5VKMN6CPluJVLdjHeFL55SB7nlDjp15WhoMU1A="
}

It's only a temporary solution till Laravel supports an external API for applications which have mobile as the only possible interface for creating OAuth clients and users. Hope it helps you! Cheers.
If you mean data in app.config, it is simple! You have to use these two classes:

EntityConnectionStringBuilder
https://msdn.microsoft.com/en-us/library/system.data.entityclient.entityconnectionstringbuilder(v=vs.110).aspx

and SqlConnectionStringBuilder
https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlconnectionstringbuilder(v=vs.110).aspx

I learned it from this page: Programmatic Connection Strings in Entity Framework 6. It is a very good guide.

In case that link doesn't help you, just Google something like this: C# define connection string at runtime

After you put the whole connection string inside your code, you can go and delete any sensitive data from the connectionStrings tag of the app.config file, because your app will not use it anymore! Then compile your code again. If you are using DB First in EF, then you can check this guide too: How to set Connection String with Entity Framework

UPDATED: I added two of my classes that manage and create connection strings programmatically (dynamically). One belongs to an Entity Framework project where I used SQL Server Compact Edition (SQL Server CE), and the second belongs to another Entity Framework project where I used SQL Server Express 2014 with SQL Server authentication (the sa username). I will leave both methods here in case anyone needs them.

This belongs to my SQL Server CE project:

public static string GetDBConnectionString(string dataParentPath = "")
{
    EntityConnectionStringBuilder entityBuilder = new EntityConnectionStringBuilder();
    SqlCeConnectionStringBuilder sqlCEBuilder = new SqlCeConnectionStringBuilder();

    if (string.IsNullOrEmpty(dataParentPath) == true)
        dataParentPath = @"C:\MyDBFolder\CMS.sdf";

    sqlCEBuilder.DataSource = dataParentPath;
    sqlCEBuilder.Password = "12345687";
    sqlCEBuilder.MaxDatabaseSize = 4090;

    entityBuilder.Metadata = "res://*/CMS.csdl|res://*/CMS.ssdl|res://*/CMS.msl";
    entityBuilder.ProviderConnectionString = sqlCEBuilder.ToString();
    entityBuilder.Provider = "System.Data.SqlServerCe.4.0";

    return entityBuilder.ToString();
}

This belongs to my SQL Server Express project with SQL Server authentication:

using System;
using System.Collections.Generic;
using System.Data.Entity.Core.EntityClient;
using System.Data.SqlClient;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace CMS
{
    class mySettings
    {
        public static string GetDBConnectionString()
        {
            // **************************************************
            // This is my "ConnectionString" from the App.config file:
            //
            // <connectionStrings>
            //   <add name="CMSEntities"
            //        connectionString=
            //          "metadata=res://*/CMS.csdl|res://*/CMS.ssdl|res://*/CMS.msl
            //          ;provider=System.Data.SqlClient
            //          ;provider connection string=&quot;
            //          data source=MY-PC\SQLEXPRESS
            //          ;initial catalog=CMS
            //          ;user id=sa
            //          ;password=12345687
            //          ;MultipleActiveResultSets=True
            //          ;App=EntityFramework
            //          &quot;"
            //        providerName="System.Data.EntityClient" />
            // </connectionStrings>
            // **************************************************

            string metaData = "res://*/CMS.csdl|res://*/CMS.ssdl|res://*/CMS.msl";
            string providerName = "System.Data.SqlClient";
            string dataSource = @"MY-PC\SQLEXPRESS";
            string databaseName = "CMS"; // = InitialCatalog
            string userID = "sa";
            string password = "12345687";
            string appName = "EntityFramework";

            EntityConnectionStringBuilder entityBuilder = new EntityConnectionStringBuilder();
            SqlConnectionStringBuilder sqlBuilder = new SqlConnectionStringBuilder();

            // = = = = = = = = = = = = = = = =
            sqlBuilder.DataSource = dataSource;
            sqlBuilder.InitialCatalog = databaseName;
            sqlBuilder.MultipleActiveResultSets = true;
            sqlBuilder.UserID = userID;
            sqlBuilder.Password = password;
            sqlBuilder.ApplicationName = appName;
            // = = = = = = = = = = = = = = = =

            entityBuilder.Provider = providerName;
            entityBuilder.Metadata = metaData;
            entityBuilder.ProviderConnectionString = sqlBuilder.ConnectionString;

            return entityBuilder.ToString();
        }
    }
}

As you can see, my database in both projects has the same name, "CMS", so its entities will be named "CMSEntities". Now you have to override its DbContext constructor. It is important, but it's the easiest part! A better description than mine is from this page "http://www.cosairus.com/Blog/2015/3/10/programmatic-connection-strings-in-entity-framework-6":

Now your Entity Model extends from DbContext and DbContext provides a constructor to pass in a Connection String, but your Entity Model does not overload those constructors for you. In order to access the constructor overload, you will need to create a new class partial for your Entity Model database context in the same namespace as your Entity Model with the required constructor signature. Pro Tip: be sure to name the filename of the cs file a different name than the Entity Model database context in the event that future generated code does not overwrite your changes.

So I build a class at the root of my project. The class must be partial:

using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace CMS // Your Project Namespace
{
    public partial class CMSEntities : DbContext
    {
        public CMSEntities(string connectionString)
            : base(connectionString)
        {
        }
    }
}

And anytime I want access to my database I use this code:

using (CMSEntities db = new CMSEntities(mySettings.GetDBConnectionString()))
{
    // Do your DB stuff here...
}

I hope it helps you or others; I learned all of this from this site, stackoverflow, and its users. Good luck.
You should use an existing library that supports the type of JWT signature you're using. For a quick reference on your available options for PHP, check the Libraries section in jwt.io.

Using an existing library is preferred in most situations; however, it's also important to do some assessment of the quality of the library. For JWT signature validation, read this article (Critical vulnerabilities in JSON Web Token libraries) to ensure that your usage of the libraries does not lead to possible vulnerabilities.

Update: The tokens are different because they are signed with different keys and the payload also differs; the iss in one is "https://lowie.eu.auth0.com/" and in the other is "https:\/\/lowie.eu.auth0.com\/". You can check that by decoding the payload with a Base64 decoder and looking at the raw output. More importantly, you should not be creating any tokens, just validating that they are valid and were issued by the trusted issuer to which you delegated the actual authentication process.
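For illustration, a hedged sketch of library-based validation using firebase/php-jwt (class and method names follow that library's v5 API; key loading and the surrounding request handling are simplified):

<?php
use Firebase\JWT\JWT;

// $jwt is the incoming token, $publicKey the issuer's verification key (PEM).
try {
    // decode() verifies the signature and the standard time-based claims for you.
    $payload = JWT::decode($jwt, $publicKey, array('RS256'));
    if ($payload->iss !== 'https://lowie.eu.auth0.com/') {
        throw new UnexpectedValueException('Untrusted issuer');
    }
} catch (Exception $e) {
    // Signature invalid, token expired, or issuer mismatch: reject the request.
    http_response_code(401);
    exit;
}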
In C, you cannot declare a function inside another function like you did. Here is your code, fixed so it will compile:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>

char *encryption(char []);

char alpha[26] = {'a','b','c','d','e','f','g','h','i','j','k','l','m',
                  'n','o','p','q','r','s','t','u','v','w','x','y','z'};
char key[27];   /* 26 key characters plus the terminating NUL */

char *encryption(char cipher_text[])
{
    int i, j;

    printf("enter the unique KEY of 26 characters: ");
    scanf("%26s", key);   /* width limit prevents overflowing key[] */

    printf("\n abcdefghijklmnopqrstuvwxyz \n");
    printf("%s", key);

    for (i = 0; i < strlen(cipher_text); i++) {
        for (j = 0; j < 26; j++) {
            if (alpha[j] == cipher_text[i]) {
                cipher_text[i] = key[j];
                break;
            }
        }
    }

    printf("your message enc: %s", cipher_text);
    return cipher_text;
}

int main()
{
    char msg[255];

    printf("\n Enter plain text: ");
    scanf("%[^\n]", msg);

    encryption(msg);
    return 0;
}

How to generate random characters is answered here.
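Since the answer only links to how to generate random characters, here is a minimal hedged sketch (srand/rand are fine for a toy cipher like this, but not for real cryptography):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    char key[27];
    srand((unsigned) time(NULL));        /* seed once per run */
    for (int i = 0; i < 26; i++) {
        key[i] = 'a' + rand() % 26;      /* random lowercase letter */
    }
    key[26] = '\0';
    printf("generated key: %s\n", key);
    return 0;
}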
Unfortunately, this workflow isn't well supported by existing Apache NiFi processors. You could probably fashion a flow that split the JSON content into attributes, split each attribute into the content of an individual flowfile, encrypted that content, merged the flowfiles back, and reconstituted the now-encrypted content into attributes via UpdateAttribute. I have created a Jira for a new NiFi processor to make this much simpler.

My recommendation until such time as that is available is to use the ExecuteScript processor to achieve this. I have provided a template with an example, which you can import directly into your NiFi instance and connect to your flow. The body of the ExecuteScript processor is provided below (you can see how I initialized the AES/GCM cipher, and change the algorithm, key, and IV to your desired values).

import javax.crypto.Cipher
import javax.crypto.SecretKey
import javax.crypto.spec.IvParameterSpec
import javax.crypto.spec.SecretKeySpec
import java.nio.charset.StandardCharsets

FlowFile flowFile = session.get()
if (!flowFile) {
    return
}
try {
    // Get the raw values of the attributes
    String normalAttribute = flowFile.getAttribute('Normal Attribute')
    String sensitiveAttribute = flowFile.getAttribute('Sensitive Attribute')

    // Instantiate an encryption cipher
    // Lots of additional code could go here to generate a random key, derive a key from a password, read from a file or keyring, etc.
    String keyHex = "0123456789ABCDEFFEDCBA9876543210" // * 2 for 256-bit encryption
    SecretKey key = new SecretKeySpec(keyHex.getBytes(StandardCharsets.UTF_8), "AES")
    IvParameterSpec iv = new IvParameterSpec(keyHex[0..<16].getBytes(StandardCharsets.UTF_8))
    Cipher aesGcmEncCipher = Cipher.getInstance("AES/GCM/NoPadding", "BC")
    aesGcmEncCipher.init(Cipher.ENCRYPT_MODE, key, iv)

    String encryptedNormalAttribute = Base64.encoder.encodeToString(aesGcmEncCipher.doFinal(normalAttribute.bytes))
    String encryptedSensitiveAttribute = Base64.encoder.encodeToString(aesGcmEncCipher.doFinal(sensitiveAttribute.bytes))

    // Add a new attribute with the encrypted normal attribute
    flowFile = session.putAttribute(flowFile, 'Normal Attribute (encrypted)', encryptedNormalAttribute)

    // Replace the sensitive attribute inline with the cipher text
    flowFile = session.putAttribute(flowFile, 'Sensitive Attribute', encryptedSensitiveAttribute)
    session.transfer(flowFile, REL_SUCCESS)
} catch (Exception e) {
    log.error("There was an error encrypting the attributes: ${e.getMessage()}")
    session.transfer(flowFile, REL_FAILURE)
}
There are various ways to do this, but here is my recommendation:

// For ASP.Net MVC 5 simply inherit from AuthorizationAttribute and override the methods.
public class AccessControlAttribute : Attribute, IAuthorizationFilter
{
    private readonly Roles role;

    public AccessControlAttribute(Roles role)
    {
        this.role = role;
    }

    private Boolean AuthorizationCore(AuthorizationFilterContext context)
    {
        var username = context.HttpContext.Request.Cookies["loginCookie_username"];
        var password = context.HttpContext.Request.Cookies["loginCookie_password"];

        if (role == Roles.FakeFullAccess)
        {
            username = "FAKE";
            goto final;
        }

        // In ASP.Net MVC 5 use Ninject for dependency injection and get the service using:
        // [NinjectContext].GetKernel.Get<DbContext>();
        DbContext db = (DbContext) context.HttpContext.RequestServices.GetService(typeof(DbContext));

        if (username != null && password != null)
        {
            var findUser = db.Set<Login>().Find(username);
            if (findUser != null && findUser.Password.Equals(password) && findUser.RoleId == (int)role)
            {
                goto final;
            }
        }
        return false;

    final:
        {
            context.HttpContext.User.AddIdentity(new System.Security.Principal.GenericIdentity(username));
            return true;
        }
    }

    private void HandleUnauthorizedRequest(AuthorizationFilterContext context)
    {
        context.Result = new RedirectToRouteResult(new { area = "", controller = "", action = "" });
    }

    public void OnAuthorization(AuthorizationFilterContext context)
    {
        if (AuthorizationCore(context))
        {
            // If using a combination of roles, you have to unmask it
            if (role == Roles.FakeFullAccess)
            {
                context.HttpContext.Request.Headers.Add("Render", "FakeAccess");
            }
            else if (role == Roles.Admin)
            {
                context.HttpContext.Request.Headers.Add("Render", "AdminAccess");
            }
        }
        else
        {
            HandleUnauthorizedRequest(context);
        }
    }
}

[Flags]
public enum Roles
{
    FakeFullAccess = 0,
    ReadOnly = 1,
    Admin = 2,
    Supervisor = 1 << 2,
    AnotherRole = 1 << 3
}

In your view you can read the added header and customize the view (in ASP.Net Core there's no access to ControllerContext and ViewBag; if using ASP.Net MVC 5 you don't need to use the header trick):

// For ASP.Net MVC 5 use the ViewBag or ViewData
@Html.Partial(HttpContext.Request.Header["Render"]) // Assuming this renders the menu with proper functions.

Now you have a fully customizable role-based authentication system with fake access for testing.

Update: To consume the attribute, do the following:

[AccessControl(Roles.Admin)]
public TestController : Controller
{
    ...
}

// Dedicated for testing
[AccessControl(Roles.FakeFullAccess)]
public PreviewController : TestController {}

You can also combine roles if required, like [AccessControl(Roles.FakeFullAccess | Roles.ReadOnly)], but you have to implement an unmasking method.
1) How I can get that same device ID in client side application?

You can call the following REST endpoint in order to retrieve from the server various data about the application, including the deviceId:
http://www.ibm.com/support/knowledgecenter/en/SSHS8R_8.0.0/com.ibm.worklight.apiref.doc/rest_runtime/r_restapi_push_device_registrations_get.html

2) User ID field also I can see in mobilefirst console device register information. How Can I add particular User ID while registering device?

The sample uses the MobileFirst security framework, and that's where the userId is coming from. Please refer to the security documentation, tutorials and samples:

https://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/authentication-and-security/
https://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/authentication-and-security/user-authentication/android/

The same userId is also used by the Push service by default (Push retrieves the user id from the request being made; if the user is already logged in, the user id is part of the request). If there is no challenge handler in place, the default user id would be anonymous.
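If you want to script that first REST call rather than use a browser, a rough sketch in Python follows. The exact URL, response shape, and the OAuth scope required all come from the device-registrations doc linked above; every value below is a placeholder assumption, not a tested endpoint:

import requests

# Placeholders -- substitute your server, app id, and a token obtained per the linked doc
url = "https://your-mfp-server:9443/imfpush/v1/apps/com.example.app/devices"
token = "<OAuth access token>"

resp = requests.get(url, headers={"Authorization": "Bearer " + token})
resp.raise_for_status()
# Assumed response shape: a "devices" array whose entries carry deviceId and userId
for registration in resp.json().get("devices", []):
    print(registration.get("deviceId"), registration.get("userId"))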
When I use [Authorize(Policy = "Bearer")] I get Token authorisation and when I used [Authorize] I get identity authorisation, how can I combine both? [sic] Set the ActiveAuthenticationSchemes property. It takes a comma separated list of scheme names. Here is an example that activates the cookie middleware that Identity uses and the bearer (token) middleware. [Authorize(ActiveAuthenticationSchemes = "Bearer, Identity.Application")] Both the bearer and the cookie middleware will run and have a chance to create and append an identity for the current user. Remarks: You can activate whatever authentication schemes you need. The default scheme names are in the Identity and Authentication namespaces. E.g. Microsoft.AspNetCore.Authentication.JwtBearer .JwtBearerDefaults.AuthenticationScheme // "Bearer" Microsoft.AspNetCore.Identity .IdentityCookieOptions.ApplicationCookie // "Identity.Application" ... See also: Limiting Identity by Scheme JwtBearerDefaults IdentityCookieOptions
If you're in a situation where the Cognito Javascript SDK isn't going to work for your purposes, you can still see how it handles the refresh process in the SDK source: You can see in refreshSession that the Cognito InitiateAuth endpoint is called with REFRESH_TOKEN_AUTH set for the AuthFlow value, and an object passed in as the AuthParameters value. That object will need to be configured to suit the needs of your User Pool. Specifically, you may have to pass in your SECRET_HASH if your targeted App client id has an associated App client secret. User Pool Client Apps created for use with the Javascript SDK currently can't contain a client secret, and thus a SECRET_HASH isn't required to connect with them. Another caveat that might throw you for a loop is if your User Pool is set to remember devices, and you don't pass in the DEVICE_KEY along with your REFRESH_TOKEN. The Cognito API currently returns an "Invalid Refresh Token" error if you are passing in the RefreshToken without also passing in your DeviceKey. This error is returned even if you are passing in a valid RefreshToken. The thread linked above illuminates that, though I do hope AWS updates their error handling to be less cryptic in the future. As discussed in that thread, if you are using AdminInitiateAuth along with ADMIN_NO_SRP_AUTH, your successful authentication response payload does not currently contain NewDeviceMetadata; which means you won't have any DeviceKey to pass in as you attempt to refresh your tokens. My app calls for implementation in Python, so here's an example that worked for me: def refresh_token(self, username, refresh_token): try: return client.initiate_auth( ClientId=self.client_id, AuthFlow='REFRESH_TOKEN_AUTH', AuthParameters={ 'REFRESH_TOKEN': refresh_token, 'SECRET_HASH': self.get_secret_hash(username) # Note that SECRET_HASH is missing from JSDK # Note also that DEVICE_KEY is missing from my example } ) except botocore.exceptions.ClientError as e: return e.response
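For completeness, the get_secret_hash helper referenced above isn't part of boto3; the value Cognito expects is Base64(HMAC-SHA256(client_secret, username + client_id)). Here is a minimal sketch of one common implementation (the function name mirrors my example above):

import base64
import hashlib
import hmac


def get_secret_hash(username: str, client_id: str, client_secret: str) -> str:
    """Compute Cognito's SECRET_HASH: Base64(HMAC-SHA256(client_secret, username + client_id))."""
    message = (username + client_id).encode("utf-8")
    digest = hmac.new(client_secret.encode("utf-8"), message, hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")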
Original answer using git's start-ssh-agent Make sure you have Git installed and have git's cmd folder in your PATH. For example, on my computer the path to git's cmd folder is C:\Program Files\Git\cmd Make sure your id_rsa file is in the folder c:\users\yourusername\.ssh Restart your command prompt if you haven't already, and then run start-ssh-agent. It will find your id_rsa and prompt you for the passphrase Update 2019 - A better solution if you're using Windows 10: OpenSSH is available as part of Windows 10 which makes using SSH from cmd/powershell much easier in my opinion. It also doesn't rely on having git installed, unlike my previous solution. Open Manage optional features from the start menu and make sure you have Open SSH Client in the list. If not, you should be able to add it. Open Services from the start Menu Scroll down to OpenSSH Authentication Agent > right click > properties Change the Startup type from Disabled to any of the other 3 options. I have mine set to Automatic (Delayed Start) Open cmd and type where ssh to confirm that the top listed path is in System32. Mine is installed at C:\Windows\System32\OpenSSH\ssh.exe. If it's not in the list you may need to close and reopen cmd. Once you've followed these steps, ssh-agent, ssh-add and all other ssh commands should now work from cmd. To start the agent you can simply type ssh-agent. Optional step/troubleshooting: If you use git, you should set the GIT_SSH environment variable to the output of where ssh which you ran before (e.g C:\Windows\System32\OpenSSH\ssh.exe). This is to stop inconsistencies between the version of ssh you're using (and your keys are added/generated with) and the version that git uses internally. This should prevent issues that are similar to this Some nice things about this solution: You won't need to start the ssh-agent every time you restart your computer Identities that you've added (using ssh-add) will get automatically added after restarts. (It works for me, but you might possibly need a config file in your c:\Users\User\.ssh folder) You don't need git! You can register any rsa private key to the agent. The other solution will only pick up a key named id_rsa Hope this helps
Using promises:

.factory('AuthenticationService', ['Base64', '$http', '$cookieStore', '$rootScope', '$q',
    function (Base64, $http, $cookieStore, $rootScope, $q) {
        var service = {};

        service.Login = function (username, password) {
            var deferred = $q.defer();
            var authdata = Base64.encode(username + ':' + password);

            $rootScope.globals = {
                currentUser: {
                    username: username,
                    authdata: authdata
                }
            };

            $http.defaults.headers.common['Authorization'] = 'Basic ' + authdata;
            $cookieStore.put('globals', $rootScope.globals);

            $http.post('http://localhost:8080/v1/login', { username: username, password: password })
                .then(function (response) {
                    deferred.resolve(response);
                }, function (error) {
                    deferred.reject(error);
                });

            return deferred.promise;
        };

        service.ClearCredentials = function () {
            $rootScope.globals = {};
            $cookieStore.remove('globals');
            $http.defaults.headers.common.Authorization = 'Basic ';
        };

        return service;
    }])

And your controller:

.controller('LoginController', ['$scope', '$rootScope', '$location', 'AuthenticationService',
    function ($scope, $rootScope, $location, AuthenticationService) {
        // reset login status
        AuthenticationService.ClearCredentials();

        $scope.login = function () {
            $scope.dataLoading = true;
            AuthenticationService.Login($scope.username, $scope.password)
                .then(function (success) {
                    $location.path('/');
                }, function (error) {
                    $scope.error = error.message;
                    $scope.dataLoading = false;
                });
        };
    }])
Figure out what happens (and what should happen) in the different situations, and create tests with proper expectations. For example, on successful login the user data is set in the auth storage and a redirect header is set; that's something you could test. Likewise, on a non-successful login attempt no user data is stored, no redirect header is set, and a flash message is rendered. All these things can easily be checked in a controller integration test, using either the helper assertion methods, or even manually via the provided session and response objects; check:

$_requestSession
$_response
assertSession()
assertRedirect()
assertRedirectContains()
assertResponse()
assertResponseContains()
etc...

Here are two very basic examples:

namespace App\Test\TestCase\Controller;

use Cake\TestSuite\IntegrationTestCase;

class AccountControllerTest extends IntegrationTestCase
{
    public function testLoginOk()
    {
        $this->enableCsrfToken();
        $this->enableSecurityToken();

        $this->post('/account/login', [
            'username' => 'the-username',
            'password' => 'the-password'
        ]);

        $expected = [
            'id' => 1,
            'username' => 'the-username'
        ];
        $this->assertSession($expected, 'Auth.User');

        $expected = [
            'controller' => 'Dashboard',
            'action' => 'index'
        ];
        $this->assertRedirect($expected);
    }

    public function testLoginFailure()
    {
        $this->enableCsrfToken();
        $this->enableSecurityToken();

        $this->post('/account/login', [
            'username' => 'wrong-username',
            'password' => 'wrong-password'
        ]);

        $this->assertNull($this->_requestSession->read('Auth.User'));
        $this->assertNoRedirect();

        $expected = __d('cockpit', 'Your username or password is incorrect');
        $this->assertResponseContains($expected);
    }
}

See also

Cookbook > Testing > Controller Integration Testing
Cookbook > Testing > Controller Integration Testing > Testing Actions That Require Authentication
Cookbook > Testing > Controller Integration Testing > Assertion methods
SimpleCov is a code coverage tool which is intended to be run on your local machine or a CI such as Travis CI. It should not be run on Heroku, which is for production or staging.

You should place simplecov and any test-related gems in the test group of your Gemfile:

group :test do
  gem 'simplecov', '~> 0.12.0'
end

Run bundle to regenerate the Gemfile.lock and commit the result. Redeploy the application to Heroku by pushing the changes.

Update

Your Gemfile has gem 'codeclimate-test-reporter' outside the test group, which is what is causing this error. You also have listen, which is likewise a tool not suited for production.

All the gems that are required in all environments should be placed at the top of the Gemfile, followed by the groups. Prefer placing gems in group blocks over using the group option. In general, be more careful when adding dependencies, and don't let your Gemfile become a mess, because that's how you get these issues in the first place.

source 'https://rubygems.org'

ruby "2.3.0"

gem 'rails', '4.2.5.1'
gem 'tzinfo-data', platforms: [:mingw, :mswin, :x64_mingw, :jruby] # Only needed on Windows and jRuby
gem 'puma' # You should have a version constraint here!!!

## == DB/ORM =====
gem 'pg' # You should have a version constraint here!!!
gem "has_permalink"
#gem 'delayed_job_active_record'

## == Authentication ====
gem 'devise'
gem 'bcrypt', '~> 3.1.10'

## == Front-End ====
# Use jquery as the JavaScript library
gem 'jquery-rails'
gem 'jquery-ui-rails'
gem 'uglifier', '>= 1.3.0'
gem 'bootstrap-sass', '~> 3.3.6'
gem 'sass-rails', '>= 3.2'
# ---- gem 'sprockets-rails' not needed since about rails 4.0
gem 'bootstrap-select-rails'
# Turbolinks makes following links in your web application faster. Read more: https://github.com/rails/turbolinks
gem 'turbolinks'
gem 'momentjs-rails', '>= 2.9.0'
gem 'bootstrap3-datetimepicker-rails', '~> 4.17.42'
gem 'bootstrap-wysihtml5-rails', github: 'nerian/bootstrap-wysihtml5-rails'
gem 'bourbon'
gem 'neat'
gem 'font-awesome-rails'
gem 'wicked'

## == Image uploads ====
gem 'carrierwave'
gem 'rmagick'

## == API's ====
gem 'mandrill-api' # can most likely be removed as its a dependency of one of your gems.
gem 'fog'
gem 'stripe'

## == Misc ====
gem 'will_paginate'
gem 'will_paginate-bootstrap'
#gem 'sorcery'

group :development, :test do
  gem 'mailcatcher' # Don't add to gemfile. Read the readme
  gem 'dotenv-rails'
  gem 'byebug'
  gem 'spring'
  # rspec-rails depends on rspec so you don't need to list it
  # it should be in the development group as well so that the generators work.
  gem 'rspec-rails'
  gem 'therubyracer', :platforms => :ruby # heroku has its own JS runtime.
end

group :test do
  gem 'rspec-instafail', require: false
  gem 'guard-rspec', require: false
  gem 'vcr'
  gem 'capybara'
  gem 'launchy'
  gem 'selenium-webdriver'
  gem 'simplecov', '~> 0.12.0'
  gem 'webmock', '~> 1.21.0' # don't use in development!
  gem 'database_cleaner', '~> 1.5.0' # don't use in development!
  gem 'codeclimate-test-reporter' # This was the gem that was breaking your development server.
end

group :production do
  gem 'rails_12factor'
end
There are a few gotchas on the way. You can find all the needed info on Stack Overflow; I have gathered it in this answer for convenience.

Things to be noticed

I assume Android KitKat and above.
The intent for incoming SMS is "android.provider.Telephony.SMS_RECEIVED".
You can change the priority of the intent filter, but it's not necessary.
You need the "android.permission.RECEIVE_SMS" permission in the manifest XML in order to receive SMS messages. In Android 6 and above, you additionally need to ask for the permission at runtime.
You do not need to set the MIME type of data in the intent filter. An intent filter should pass only on empty data if no MIME type is set, but fortunately it will still work without MIME.
adb shell am broadcast will not work. Use a telnet connection to the simulator to test SMS receiving.
Long SMS messages are divided into small SMS chunks. We need to concatenate them.

How to send an SMS message to the emulator

The most important thing is to have the possibility to send fake SMS messages to the device, so we can test the code. For this we will use a virtual device and a telnet connection to it.

Create a virtual device in Android Studio and run the simulator.
Look at the title bar in the simulator window. There is the device name and a port number. We need to know this port number in the next steps.
Now connect to the port number shown in the simulator title bar with telnet:

$ telnet localhost 5554

If you see this: Android Console: Authentication required, then you need to authenticate the connection with this command:

auth xxxxxx

Replace the xxxxxx above with the token read from the ~/.emulator_console_auth_token file. Now you should be able to run all the commands. To send an SMS message, type this command:

sms send 555 "This is a message"

where you can replace 555 with the sender telephone number and a message of your own.

How to listen to SMS_RECEIVED broadcasts

To get the broadcasts, you need to register a BroadcastReceiver object. You can do this in the manifest.xml OR just call the registerReceiver function. I will show you the latter, as it is easier to reason about and yet more flexible.

Connecting the broadcast receiver with the main activity

The data flow is one way: from broadcast receiver to the main activity. So the simplest way to get them to talk is to use a function interface. The activity will implement such a function, and the broadcast receiver will have the activity instance passed as a parameter in the constructor.

File SmsHandler.java:

package ...

interface SmsHandler {
    void handleSms(String sender, String message);
}

Implementing the broadcast receiver

The broadcast receiver will get the intent in a callback. We will use the function Telephony.Sms.Intents.getMessagesFromIntent(intent) to get the SMS messages. Notice the SmsHandler parameter in the constructor. It will be the activity to which we will send the received SMS.

File SmsInterceptor.java:

package ...

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.provider.Telephony;
import android.telephony.SmsMessage;
import java.util.HashMap; // needed for the chunk concatenation below
import java.util.Map;

public class SmsInterceptor extends BroadcastReceiver {

    private SmsHandler handler;

    /* Constructor. Handler is the activity
     * which will show the messages to user.
*/ public SmsInterceptor(SmsHandler handler) { this.handler = handler; } @Override public void onReceive(Context context, Intent intent) { /* Retrieve the sms message chunks from the intent */ SmsMessage[] rawSmsChunks; try { rawSmsChunks = Telephony.Sms.Intents.getMessagesFromIntent(intent); } catch (NullPointerException ignored) { return; } /* Gather all sms chunks for each sender separately */ Map<String, StringBuilder> sendersMap = new HashMap<>(); for (SmsMessage rawSmsChunk : rawSmsChunks) { if (rawSmsChunk != null) { String sender = rawSmsChunk.getDisplayOriginatingAddress(); String smsChunk = rawSmsChunk.getDisplayMessageBody(); StringBuilder smsBuilder; if ( ! sendersMap.containsKey(sender) ) { /* For each new sender create a separate StringBuilder */ smsBuilder = new StringBuilder(); sendersMap.put(sender, smsBuilder); } else { /* Sender already in map. Retrieve the StringBuilder */ smsBuilder = sendersMap.get(sender); } /* Add the sms chunk to the string builder */ smsBuilder.append(smsChunk); } } /* Loop over every sms thread and concatenate the sms chunks to one piece */ for ( Map.Entry<String, StringBuilder> smsThread : sendersMap.entrySet() ) { String sender = smsThread.getKey(); StringBuilder smsBuilder = smsThread.getValue(); String message = smsBuilder.toString(); handler.handleSms(sender, message); } } } The main activity Finally we need to implement SmsHandler interface into the main activity and add registering the broadcast receiver and permission check to the onCreate function. File MainActivity.java: package ... import ... public class MainActivity extends AppCompatActivity implements SmsHandler { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); /* Register the broadcast receiver */ registerSmsListener(); /* Make sure, we have the permissions */ requestSmsPermission(); } /* This function will be called by the broadcast receiver */ @Override public void handleSms(String sender, String message) { /* Here you can display the message to the user */ } private void registerSmsListener() { IntentFilter filter = new IntentFilter(); filter.addAction("android.provider.Telephony.SMS_RECEIVED"); /* filter.setPriority(999); This is optional. */ SmsInterceptor receiver = new SmsInterceptor(this); registerReceiver(receiver, filter); } private void requestSmsPermission() { String permission = Manifest.permission.RECEIVE_SMS; int grant = ContextCompat.checkSelfPermission(this, permission); if ( grant != PackageManager.PERMISSION_GRANTED) { String[] permission_list = new String[1]; permission_list[0] = permission; ActivityCompat.requestPermissions(this, permission_list, 1); } } } Finally remember to add RECEIVE_SMS permission to your manifest xml <?xml version="1.0" encoding="utf-8"?> <manifest ...> <uses-permission android:name="android.permission.RECEIVE_SMS"/> <application> ... </application> </manifest>
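If you find yourself sending many test messages, the telnet steps above are easy to script. Here is a small sketch using Python's standard telnetlib module (note it was removed in Python 3.13; port number and token path are as described above):

import telnetlib
from pathlib import Path

PORT = 5554  # read the actual port from the emulator window title
token = (Path.home() / ".emulator_console_auth_token").read_text().strip()

tn = telnetlib.Telnet("localhost", PORT)
tn.read_until(b"OK", timeout=5)                     # wait for the console banner
tn.write(b"auth " + token.encode("ascii") + b"\n")  # authenticate the console session
tn.read_until(b"OK", timeout=5)
tn.write(b'sms send 555 "This is a message"\n')     # same command as typed manually
tn.read_until(b"OK", timeout=5)
tn.close()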
You should not be using the bcrypt hash output as an encryption key; it is not meant to be key material:

BCrypt is not a key-derivation function
BCrypt is a password storage function

You have an elliptic curve private key that you want to encrypt using a user's password. Of course you don't want to use the password directly - you want to use the password to derive an encryption key. For that you can use:

PBKDF2
scrypt

These are both key-derivation functions (i.e. password-based key derivation functions). Their purpose is to generate an encryption key given a password. They are designed to be "hard". You feed both these algorithms:

a password
cost parameters
salt
desired number of bytes (e.g. 32 ==> 32 bytes ==> 256 bits)

and it returns you a 256-bit key you can use as an encryption key with AES-256.

You then want to back up the user's key

I gather that you then want to:

store the encrypted elliptic curve private key on your server
store a hash of their password on your server

And your question was: since you already ran their password through "a hashing function", can't you just use that hash as their stored password?

No! That hash is also the encryption key protecting their private key. You don't want that private key transmitted anywhere. You don't want it existing anywhere. That 32-byte encryption key should be wiped from memory as soon as you're done with it.

What you should do, if you also wish to store a hash of the user's password, is use an algorithm that is typically used for password storage:

pbkdf2 (a key-derivation function abused into password storage)
bcrypt (better than pbkdf2)
scrypt (a key-derivation function abused into password storage; better than bcrypt)
argon2 (better than scrypt)

Update: Argon2/Argon2i/Argon2d/Argon2id is weaker than bcrypt for password authentication (read more)

You should separately run the user's password through one of these password storage algorithms. If you have access to bcrypt, use that over pbkdf2. If you have scrypt, use that for both:

derivation of an encryption key
hashing of the password

The security of your system comes from (in addition to the secrecy of the password) the computational distance between the user's password and the encryption key protecting their private key:

"hunter2" --PBKDF2--> Key material
"hunter2" ---------bcrypt-------> Key material
"hunter2" ----------------scrypt----------> Key material

You want as much distance between the password and the key.

Not-recommended cheat

If you're really desperate to save CPU cycles (and avoid computing scrypt twice), you technically could take:

Key Material ---SHA2---> "hashed password"

and call the hash of the encryption key your "hashed password" and store that. Computation of a single SHA2 is negligible. This is acceptable because the only way an attacker can use it is by trying to guess every possible 256-bit encryption key - which is the problem they can't solve in the first place. There's no way to brute-force a 256-bit key. And if they were to try to brute-force it, the extra hashed version doesn't help them, as they could just test their attempt by trying to decrypt the private key. But it's much less desirable because you're storing a (transformed) version of the encryption key. You want that key (and any transformed versions of it) stored as little as possible.
To sum up:

generate EC key pair
encryptionKey = scryptDeriveBytes(password, salt, cost, 32)
encryptedPrivateKey = AES256(privateKey, encryptionKey)
passwordHash = scryptHashPassword(password, salt, cost)

and upload:

encryptedPrivateKey
passwordHash
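Here is a minimal Python sketch of that summary, using hashlib.scrypt from the standard library for both derivations and the third-party cryptography package for AES-GCM. The cost parameters and the placeholder key bytes are illustrative assumptions, not recommendations:

import hashlib
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

password = b"hunter2"
kdf_salt = os.urandom(16)   # stored next to the encrypted key
hash_salt = os.urandom(16)  # stored next to the password hash; must differ from kdf_salt

# 1. Derive the encryption key from the password; this key is never stored or uploaded
encryption_key = hashlib.scrypt(password, salt=kdf_salt, n=2**14, r=8, p=1, dklen=32)

# 2. Encrypt the EC private key with AES-256-GCM
private_key_bytes = b"<DER-encoded EC private key>"  # placeholder
nonce = os.urandom(12)
encrypted_private_key = nonce + AESGCM(encryption_key).encrypt(nonce, private_key_bytes, None)

# 3. Separately hash the password for authentication, with its own salt
password_hash = hashlib.scrypt(password, salt=hash_salt, n=2**14, r=8, p=1, dklen=32)

# Upload encrypted_private_key and password_hash (plus both salts);
# wipe encryption_key from memory as soon as possible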
I was able to get it to work. A few issues got in the way.

First, you have to allow iOS to accept self-signed certificates. This requires setting up the Alamofire serverTrustPolicy:

let serverTrustPolicies: [String: ServerTrustPolicy] = [
    "your-domain.com": .disableEvaluation
]
self.sessionManager = Alamofire.SessionManager(
    serverTrustPolicyManager: ServerTrustPolicyManager(policies: serverTrustPolicies)
)

From there, you have to override sessionDidReceiveChallenge to send the client certificate. Because I wanted to use a p12 file, I modified some code I found elsewhere (sorry, I don't have the source anymore) to make it Swift 3.0, importing the p12 using Foundation classes:

import Foundation

public class PKCS12 {
    var label: String?
    var keyID: Data?
    var trust: SecTrust?
    var certChain: [SecTrust]?
    var identity: SecIdentity?

    let securityError: OSStatus

    public init(data: Data, password: String) {
        //self.securityError = errSecSuccess
        var items: CFArray?
        let certOptions: NSDictionary = [kSecImportExportPassphrase as NSString: password as NSString]
        // import certificate to read its entries
        self.securityError = SecPKCS12Import(data as NSData, certOptions, &items)
        if securityError == errSecSuccess {
            let certItems: Array = (items! as Array)
            let dict: Dictionary<String, AnyObject> = certItems.first! as! Dictionary<String, AnyObject>
            self.label = dict[kSecImportItemLabel as String] as? String
            self.keyID = dict[kSecImportItemKeyID as String] as? Data
            self.trust = dict[kSecImportItemTrust as String] as! SecTrust?
            self.certChain = dict[kSecImportItemCertChain as String] as? Array<SecTrust>
            self.identity = dict[kSecImportItemIdentity as String] as! SecIdentity?
        }
    }

    public convenience init(mainBundleResource: String, resourceType: String, password: String) {
        self.init(data: NSData(contentsOfFile: Bundle.main.path(forResource: mainBundleResource, ofType: resourceType)!)! as Data, password: password)
    }

    public func urlCredential() -> URLCredential {
        return URLCredential(
            identity: self.identity!,
            certificates: self.certChain!,
            persistence: URLCredential.Persistence.forSession)
    }
}

This allows me to import the file and send it back in response to the challenge:

let cert = PKCS12.init(mainBundleResource: "cert", resourceType: "p12", password: "password")

self.sessionManager.delegate.sessionDidReceiveChallenge = { session, challenge in
    if challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodClientCertificate {
        return (URLSession.AuthChallengeDisposition.useCredential, self.cert.urlCredential())
    }
    if challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust {
        return (URLSession.AuthChallengeDisposition.useCredential, URLCredential(trust: challenge.protectionSpace.serverTrust!))
    }
    return (URLSession.AuthChallengeDisposition.performDefaultHandling, Optional.none)
}

Now you can use the sessionManager to create as many calls as you need to.

As a note, I've also added the following to the Info.plist, as recommended, to get around App Transport Security in newer iOS versions:

<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
    <key>NSExceptionDomains</key>
    <dict>
        <key>your-domain.com</key>
        <dict>
            <key>NSIncludesSubdomains</key>
            <true/>
            <key>NSExceptionRequiresForwardSecrecy</key>
            <false/>
            <key>NSExceptionAllowsInsecureHTTPLoads</key>
            <true/>
        </dict>
    </dict>
</dict>

I hope this helps!
I have modified your code; try this:

let ext = "jpg"
let imageURL = NSBundle.mainBundle().URLForResource("imagename", withExtension: ext)
print("imageURL:\(imageURL)")

let uploadRequest = AWSS3TransferManagerUploadRequest()
uploadRequest.body = imageURL
uploadRequest.key = "\(NSProcessInfo.processInfo().globallyUniqueString).\(ext)"
uploadRequest.bucket = S3BucketName
uploadRequest.contentType = "image/\(ext)"

let transferManager = AWSS3TransferManager.defaultS3TransferManager()
transferManager.upload(uploadRequest).continueWithBlock { (task) -> AnyObject! in
    if let error = task.error {
        print("Upload failed ❌ (\(error))")
    }
    if let exception = task.exception {
        print("Upload failed ❌ (\(exception))")
    }
    if task.result != nil {
        let s3URL = NSURL(string: "http://s3.amazonaws.com/\(self.S3BucketName)/\(uploadRequest.key!)")!
        print("Uploaded to:\n\(s3URL)")
    } else {
        print("Unexpected empty result.")
    }
    return nil
}

Or you can use my code below to upload to AWS S3; it worked fine for me. This code is written in Swift 3.

func uploadButtonPressed(_ sender: AnyObject) {
    if documentImageView.image == nil {
        // Do something to wake up user :)
    } else {
        let image = documentImageView.image!
        let fileManager = FileManager.default
        let path = (NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0] as NSString).appendingPathComponent("\(imageName!).jpeg")
        let imageData = UIImageJPEGRepresentation(image, 0.99)
        fileManager.createFile(atPath: path as String, contents: imageData, attributes: nil)

        let fileUrl = NSURL(fileURLWithPath: path)
        var uploadRequest = AWSS3TransferManagerUploadRequest()
        uploadRequest?.bucket = "BucketName"
        uploadRequest?.key = "key.jpeg"
        uploadRequest?.contentType = "image/jpeg"
        uploadRequest?.body = fileUrl as URL!
        uploadRequest?.serverSideEncryption = AWSS3ServerSideEncryption.awsKms
        uploadRequest?.uploadProgress = { (bytesSent, totalBytesSent, totalBytesExpectedToSend) -> Void in
            DispatchQueue.main.async(execute: {
                self.amountUploaded = totalBytesSent // To show the updating data status in label.
                self.fileSize = totalBytesExpectedToSend
            })
        }

        let transferManager = AWSS3TransferManager.default()
        transferManager?.upload(uploadRequest).continue(with: AWSExecutor.mainThread(), withSuccessBlock: { (taskk: AWSTask) -> Any? in
            if taskk.error != nil {
                // Error.
            } else {
                // Do something with your result.
            }
            return nil
        })
    }
}

Thanks :)
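As a side note, if you ever need the same upload from a backend instead of the app, the equivalent in Python with boto3 is only a few lines. The bucket, key, and local path below are assumptions; boto3 picks up credentials from the usual AWS config or environment:

import boto3

s3 = boto3.client("s3")


def progress(bytes_transferred):
    # boto3 invokes this callback repeatedly with the bytes sent per chunk
    print("sent", bytes_transferred, "bytes")


s3.upload_file(
    "document.jpeg",   # local file path
    "BucketName",      # your bucket
    "key.jpeg",        # object key
    ExtraArgs={"ContentType": "image/jpeg", "ServerSideEncryption": "aws:kms"},
    Callback=progress,
)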
If you follow all the steps for adding a custom field to the user, you will finish the task successfully. Here are all the steps to add a custom field to the user:

Create an ASP.NET Web Application.
Make sure you select MVC and that the Authentication is Individual User Accounts.
Go to the Models folder → open IdentityModels.cs → ApplicationUser class and add the property:

public string Code { get; set; }

Build the project.
Go to the TOOLS menu → NuGet Package Manager → click Package Manager Console.
Type Enable-Migrations and press Enter and wait until the task gets completed. You will see a response which says:

Checking if the context targets an existing database... Code First Migrations enabled for project WebApplication1.

Type Add-Migration "Code" and press Enter and wait until the task gets completed. You will see a response which says:

Scaffolding migration 'Code'. The Designer Code for this migration file includes a snapshot of your current Code First model. This snapshot is used to calculate the changes to your model when you scaffold the next migration. If you make additional changes to your model that you want to include in this migration, then you can re-scaffold it by running 'Add-Migration Code' again.

Type Update-Database and press Enter and wait until the task gets completed. You will see a response which says:

Specify the '-Verbose' flag to view the SQL statements being applied to the target database. Applying explicit migrations: [201611132135242_Code]. Applying explicit migration: 201611132135242_Code. Running Seed method.

At this step, if you refresh SQL Server Object Explorer and go to the database and look at the tables, under dbo.AspNetUsers, under columns, you will see the Code field. If you don't know which database or even which server to look at, open the Web.Config file and take a look at the connection string, which is something like this:

<add name="DefaultConnection" connectionString="Data Source=(LocalDb)\v11.0;AttachDbFilename=|DataDirectory|\aspnet-WebApplication1-20161114125903.mdf;Initial Catalog=aspnet-WebApplication1-20161114125903;Integrated Security=True" providerName="System.Data.SqlClient" />

You can see the data source (which is the SQL Server instance) and an .mdf file name, which is the database name.

Go to the Models folder → open the AccountViewModels.cs file → RegisterViewModel class and add this property: (In API v2 with EF6, you can add the line below in the Models folder → AccountBindingModels file → RegisterBindingModel class)

public string Code { get; set; }

Go to the Views folder → Account folder → open the Register.cshtml file and add this code near the other fields, for example below password:

<div class="form-group">
    @Html.LabelFor(m => m.Code, new { @class = "col-md-2 control-label" })
    <div class="col-md-10">
        @Html.TextBoxFor(m => m.Code, new { @class = "form-control" })
    </div>
</div>

Go to the Controllers folder → open the AccountController.cs file → in the HTTP POST Register action, change the line which creates the user to this:

var user = new ApplicationUser { UserName = model.Email, Email = model.Email, Code = model.Code };

Run the project and go to the /Account/Register URL and register a new user. After registering the user, if you go to the database again and view the data of the dbo.AspNetUsers table, you will see the code has been saved.

Download
You can clone or download a working example here: r-aghaei/AddPropertyToIdentityUserExample

Further reading - How to Add a custom Property to IdentityRole?
If you are interested in how to add a new property to IdentityRole, take a look at How to Add a custom Property to IdentityRole?
Below is a working solution. resource/alfresco/extension/new-user-email-context.xml: <?xml version='1.0' encoding='UTF-8'?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd"> <bean id="newUserEmail" class="demo.NewUserEmail"> <property name="policyComponent" ref="policyComponent"/> <property name="nodeService" ref="nodeService"/> <property name="personService" ref="personService"/> <property name="passwordGenerator" ref="passwordGenerator"/> <property name="authenticationService" ref="authenticationService"/> </bean> </beans> demo.NewUserEmail.java: package demo; import org.alfresco.model.ContentModel; import org.alfresco.repo.node.NodeServicePolicies; import org.alfresco.repo.policy.*; import org.alfresco.repo.security.authentication.PasswordGenerator; import org.alfresco.service.cmr.repository.*; import org.alfresco.service.cmr.security.*; import org.alfresco.util.PropertyCheck; import org.springframework.beans.factory.InitializingBean; public class NewUserEmail implements NodeServicePolicies.OnCreateNodePolicy, InitializingBean { @Override public void onCreateNode(ChildAssociationRef childAssocRef) { notifyUser(childAssocRef); } private void notifyUser(ChildAssociationRef childAssocRef) { NodeRef personRef = childAssocRef.getChildRef(); // get the user name String username = (String) this.nodeService.getProperty( personRef, ContentModel.PROP_USERNAME); // generate the new password (Alfresco's rules) String newPassword = passwordGenerator.generatePassword(); // set the new password authenticationService.setAuthentication(username, newPassword.toCharArray()); // send default notification to the user personService.notifyPerson(username, newPassword); } private PolicyComponent policyComponent; private NodeService nodeService; private PersonService personService; private PasswordGenerator passwordGenerator; private MutableAuthenticationService authenticationService; public void setPolicyComponent(PolicyComponent policyComponent) { this.policyComponent = policyComponent; } public void setNodeService(NodeService nodeService) { this.nodeService = nodeService; } public void setPersonService(PersonService personService) { this.personService = personService; } public void setPasswordGenerator(PasswordGenerator passwordGenerator) { this.passwordGenerator = passwordGenerator; } public void setAuthenticationService(AuthenticationService authenticationService) { if (authenticationService instanceof MutableAuthenticationService) { this.authenticationService = (MutableAuthenticationService) authenticationService; } } @Override public void afterPropertiesSet() throws Exception { PropertyCheck.mandatory(this, "policyComponent", policyComponent); PropertyCheck.mandatory(this, "nodeService", nodeService); PropertyCheck.mandatory(this, "passwordGenerator", passwordGenerator); PropertyCheck.mandatory(this, "authenticationService", authenticationService); PropertyCheck.mandatory(this, "personService", personService); this.policyComponent.bindClassBehaviour( NodeServicePolicies.OnCreateNodePolicy.QNAME, ContentModel.TYPE_PERSON, new JavaBehaviour(this, NodeServicePolicies.OnCreateNodePolicy.QNAME.getLocalName(), Behaviour.NotificationFrequency.TRANSACTION_COMMIT ) ); } }
If you want to use the Graph API to get user info, you need to add the token to your request header like the following:

client.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("bearer", TokenForUser);

Here is a code snippet that lists user info; hope it gives you some tips:

string AuthString = "https://login.microsoftonline.com/";
string ResourceUrl = "https://graph.windows.net";
string ClientId = "***";
var redirectUri = new Uri("https://localhost");
string TenantId = "e4162ad0-e9e3-4a16-bf40-0d8a906a06d4";

AuthenticationContext authenticationContext = new AuthenticationContext(AuthString + TenantId, false);
AuthenticationResult userAuthnResult = await authenticationContext.AcquireTokenAsync(ResourceUrl,
    ClientId, redirectUri, new PlatformParameters(PromptBehavior.RefreshSession));
TokenForUser = userAuthnResult.AccessToken;

var client = new HttpClient();
var uri = $"https://graph.windows.net/{TenantId}/users?api-version=1.6";
client.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("bearer", TokenForUser);
var response = await client.GetAsync(uri);
if (response.Content != null)
{
    var responseString = await response.Content.ReadAsStringAsync();
    Console.WriteLine(responseString);
}

You can find the ClientId, RedirectUri, TenantId, and ResourceUrl in the Azure AD native application's settings.
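The same call is easy to reproduce outside .NET, which helps isolate whether a problem is in the token or in the client code. Here is a sketch in Python (tenant id and token are placeholders; the token must be acquired for the https://graph.windows.net resource as shown above):

import requests

tenant_id = "<your tenant id>"                          # placeholder
token = "<access token for https://graph.windows.net>"  # acquire via ADAL as above

resp = requests.get(
    "https://graph.windows.net/{0}/users".format(tenant_id),
    params={"api-version": "1.6"},
    headers={"Authorization": "Bearer " + token},
)
resp.raise_for_status()
for user in resp.json()["value"]:  # Azure AD Graph wraps results in a "value" array
    print(user.get("displayName"), user.get("userPrincipalName"))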
After exhausting the current spectrum of available options from JavaScript, I decided to simply implement certificate pinning natively; it all seems so simple now that I'm done. Skip to the headers titled Android Solution and IOS Solution if you don't want to read through the process of reaching the solution.

Android

Following Kudo's recommendation, I thought out to implement pinning using okhttp3.

client = new OkHttpClient.Builder()
    .certificatePinner(new CertificatePinner.Builder()
        .add("publicobject.com", "sha1/DmxUShsZuNiqPQsX2Oi9uv2sCnw=")
        .add("publicobject.com", "sha1/SXxoaOSEzPC6BgGmxAt/EAcsajw=")
        .add("publicobject.com", "sha1/blhOM3W9V/bVQhsWAcLYwPU6n24=")
        .add("publicobject.com", "sha1/T5x9IXmcrQ7YuQxXnxoCmeeQ84c=")
        .build())
    .build();

I first started by learning how to create a native Android bridge with React Native by creating a toast module. I then extended it with a method for sending a simple request:

@ReactMethod
public void showURL(String url, int duration) {
    try {
        Request request = new Request.Builder()
            .url(url)
            .build();
        Response response = client.newCall(request).execute();
        Toast.makeText(getReactApplicationContext(), response.body().string(), duration).show();
    } catch (IOException e) {
        Toast.makeText(getReactApplicationContext(), e.getMessage(), Toast.LENGTH_SHORT).show();
    }
}

Having succeeded in sending a request, I then turned to sending a pinned request. I used these packages in my file:

import com.facebook.react.bridge.NativeModule;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;
import com.facebook.react.bridge.ReactMethod;
import com.facebook.react.bridge.Callback;

import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;
import okhttp3.CertificatePinner;

import java.io.IOException;
import java.util.Map;
import java.util.HashMap;

Kudo's approach wasn't clear on where I would get the public keys or how to generate them. Luckily, the okhttp3 docs, in addition to providing a clear demonstration of how to use the CertificatePinner, stated that to get the public keys, all I would need to do is send a request with an incorrect pin, and the correct pins will appear in the error message.

After taking a moment to realise that OkHttpClient.Builder() can be chained and I can include the CertificatePinner before the build, unlike the misleading example in Kudo's proposal (probably an older version), I came up with this method:

@ReactMethod
public void getKeyChainForHost(String hostname, Callback errorCallbackContainingCorrectKeys,
                               Callback successCallback) {
    try {
        CertificatePinner certificatePinner = new CertificatePinner.Builder()
            .add(hostname, "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAA=")
            .build();
        OkHttpClient client = (new OkHttpClient.Builder()).certificatePinner(certificatePinner).build();

        Request request = new Request.Builder()
            .url("https://" + hostname)
            .build();
        Response response = client.newCall(request).execute();
        successCallback.invoke(response.body().string());
    } catch (Exception e) {
        errorCallbackContainingCorrectKeys.invoke(e.getMessage());
    }
}

Then replacing the placeholder with the public key hashes I got in the error yielded the page's body, indicating I had made a successful request. I changed one letter of a key to make sure it was really pinning, and I knew I was on track.
I finally had this method in my ToastModule.java file:

@ReactMethod
public void getKeyChainForHost(String hostname, Callback errorCallbackContainingCorrectKeys,
                               Callback successCallback) {
    try {
        CertificatePinner certificatePinner = new CertificatePinner.Builder()
            .add(hostname, "sha256/+Jg+cke8HLJNzDJB4qc1Aus14rNb6o+N3IrsZgZKXNQ=")
            .add(hostname, "sha256/aR6DUqN8qK4HQGhBpcDLVnkRAvOHH1behpQUU1Xl7fE=")
            .add(hostname, "sha256/HXXQgxueCIU5TTLHob/bPbwcKOKw6DkfsTWYHbxbqTY=")
            .build();
        OkHttpClient client = (new OkHttpClient.Builder()).certificatePinner(certificatePinner).build();

        Request request = new Request.Builder()
            .url("https://" + hostname)
            .build();
        Response response = client.newCall(request).execute();
        successCallback.invoke(response.body().string());
    } catch (Exception e) {
        errorCallbackContainingCorrectKeys.invoke(e.getMessage());
    }
}

Android Solution

Extending React Native's OkHttpClient

Having figured out how to send a pinned HTTP request, I could now use the method I created, but ideally I thought it would be best to extend the existing client, so the whole app immediately gains the benefit. This solution is valid as of RN 0.35 and I don't know how it will fare in the future.

While looking into ways of extending the OkHttpClient for RN, I came across this article explaining how to add TLS 1.2 support by replacing the SSLSocketFactory. Reading it, I learned React Native uses an OkHttpClientProvider to create the OkHttpClient instance used by the XMLHttpRequest object, and therefore if we replace that instance we apply pinning to the whole app.

I added a file called OkHttpCertPin.java to my android/app/src/main/java/com/dreidev folder:

package com.dreidev;

import android.util.Log;
import com.facebook.react.modules.network.OkHttpClientProvider;
import com.facebook.react.modules.network.ReactCookieJarContainer;

import java.util.concurrent.TimeUnit;

import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;
import okhttp3.CertificatePinner;

public class OkHttpCertPin {
    private static String hostname = "*.efghermes.com";
    private static final String TAG = "OkHttpCertPin";

    public static OkHttpClient extend(OkHttpClient currentClient) {
        try {
            CertificatePinner certificatePinner = new CertificatePinner.Builder()
                .add(hostname, "sha256/+Jg+cke8HLJNzDJB4qc1Aus14rNb6o+N3IrsZgZKXNQ=")
                .add(hostname, "sha256/aR6DUqN8qK4HQGhBpcDLVnkRAvOHH1behpQUU1Xl7fE=")
                .add(hostname, "sha256/HXXQgxueCIU5TTLHob/bPbwcKOKw6DkfsTWYHbxbqTY=")
                .build();
            Log.d(TAG, "extending client");
            return currentClient.newBuilder().certificatePinner(certificatePinner).build();
        } catch (Exception e) {
            Log.e(TAG, e.getMessage());
        }
        return currentClient;
    }
}

This package has a method extend which takes an existing OkHttpClient, rebuilds it with the certificatePinner added, and returns the newly built instance.

I then modified my MainActivity.java file following this answer's advice by adding the following:

. . .
import com.facebook.react.ReactActivity;
import android.os.Bundle;
import com.dreidev.OkHttpCertPin;
import com.facebook.react.modules.network.OkHttpClientProvider;
import okhttp3.OkHttpClient;

public class MainActivity extends ReactActivity {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        rebuildOkHttp();
    }

    private void rebuildOkHttp() {
        OkHttpClient currentClient = OkHttpClientProvider.getOkHttpClient();
        OkHttpClient replacementClient = OkHttpCertPin.extend(currentClient);
        OkHttpClientProvider.replaceOkHttpClient(replacementClient);
    }
. . .
I chose this solution over completely reimplementing the OkHttpClientProvider's createClient method: inspecting the provider, I realized the master version had already implemented TLS 1.2 support but was not yet an available option for me to use, so rebuilding the client was the best means of extending it. I'm wondering how this approach will fare as I upgrade, but for now it works well.

Update

It seems that starting with 0.43 this trick no longer works. For time reasons I will freeze my project at 0.42 for now, until the reason why rebuilding stopped working is clear.

IOS Solution

For IOS I had thought I would need to follow a similar method, again starting with Kudo's proposal as my lead. Inspecting the RCTNetwork module, I learned that NSURLConnection was used, so instead of trying to create a completely new module with AFNetworking as suggested in the proposal, I discovered TrustKit.

Following its Getting Started guide, I simply added pod 'TrustKit' to my Podfile and ran pod install. The guide explains how to configure the pod from the plist file, but preferring code over configuration files, I added the following lines to my AppDelegate.m file:

. . .
#import <TrustKit/TrustKit.h>
. . .

@implementation AppDelegate

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
  // Initialize TrustKit
  NSDictionary *trustKitConfig =
  @{
    // Auto-swizzle NSURLSession delegates to add pinning validation
    kTSKSwizzleNetworkDelegates: @YES,

    kTSKPinnedDomains: @{
        @"efghermes.com" : @{
            kTSKEnforcePinning: @YES,
            kTSKIncludeSubdomains: @YES,
            kTSKPublicKeyAlgorithms: @[kTSKAlgorithmRsa2048],

            // The same SPKI hashes obtained in the Android implementation above
            kTSKPublicKeyHashes: @[
                @"+Jg+cke8HLJNzDJB4qc1Aus14rNb6o+N3IrsZgZKXNQ=",
                @"aR6DUqN8qK4HQGhBpcDLVnkRAvOHH1behpQUU1Xl7fE=",
                @"HXXQgxueCIU5TTLHob/bPbwcKOKw6DkfsTWYHbxbqTY="
            ],
            // Send reports for pinning failures
            // Email [email protected] if you need a free dashboard to see your App's reports
            kTSKReportUris: @[@"https://overmind.datatheorem.com/trustkit/report"]
        },
    }
  };

  [TrustKit initializeWithConfiguration:trustKitConfig];
. . .

I got the public key hashes from my Android implementation and it just worked (the version of TrustKit I received in my pods is 1.3.2). I was glad IOS turned out to be a breeze.

As a side note, TrustKit warns that its auto-swizzling won't work if the NSURLSession and NSURLConnection delegates are already swizzled; that said, it seems to be working well so far.

Conclusion

This answer presents the solution for both Android and IOS, given I was able to implement this in native code.

One possible improvement may be to implement a common platform module where setting public keys and configuring the network providers of both Android and IOS can be managed from JavaScript. Kudo's proposal mentioned that simply adding the public keys to the JS bundle may, however, expose a vulnerability where the bundle file is somehow replaced. I don't know how that attack vector would work, but certainly the extra step of signing the bundle.js, as proposed, may protect the JS bundle.

Another approach may be to simply encode the JS bundle into a base64 string and include it in the native code directly, as mentioned in this issue's conversation. This approach has the benefit of obfuscating as well as hardwiring the JS bundle into the app, making it inaccessible to attackers, or so I think.
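As an aside, the sha256/ pins used above can also be computed directly from a certificate instead of copying them out of OkHttp's error message. Here is a sketch in Python using the cryptography package (the PEM file name is an assumption; export the certificate however you like, e.g. via openssl s_client):

import base64
import hashlib

from cryptography import x509  # pip install cryptography
from cryptography.hazmat.primitives import serialization

with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# OkHttp/TrustKit pin format: base64(sha256(DER-encoded SubjectPublicKeyInfo))
spki = cert.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)
print("sha256/" + base64.b64encode(hashlib.sha256(spki).digest()).decode("ascii"))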
If you read this far, I hope I enlightened you on your quest to fix your bug, and I wish you a sunny day.
Set CloseStream to false, otherwise the output stream will be closed when you call pCopy.Close().

pCopy = new PdfSmartCopy(doc, msOutput) { CloseStream = false };

Explanation

I found no documentation on the Internet, so I had to look at the source code directly. Here's the declaration of the PdfSmartCopy class:

public class PdfSmartCopy : PdfCopy
{
    // ...
    public PdfSmartCopy(Document document, Stream os) : base(document, os)
    {
        // ...
    }
    // ...
}

Here is the declaration of the PdfCopy class:

public class PdfCopy : PdfWriter
{
    // ...
    public PdfCopy(Document document, Stream os) : base(new PdfDocument(), os)
    {
        // ...
    }
    // ...
}

The declaration of the PdfWriter class:

public class PdfWriter : DocWriter, IPdfViewerPreferences, IPdfEncryptionSettings, IPdfVersion,
    IPdfDocumentActions, IPdfPageActions, IPdfIsoConformance, IPdfRunDirection, IPdfAnnotations
{
    // ...
    protected PdfWriter(PdfDocument document, Stream os) : base(document, os)
    {
        // ...
    }
    // ...
}

And finally, the declaration of the DocWriter class:

public abstract class DocWriter : IDocListener
{
    // ...
    // default value is true
    protected bool closeStream = true;

    public virtual bool CloseStream
    {
        get { return closeStream; }
        set { closeStream = value; }
    }

    protected DocWriter(Document document, Stream os)
    {
        this.document = document;
        this.os = new OutputStreamCounter(os);
    }

    public virtual void Close()
    {
        open = false;
        os.Flush();
        if (closeStream) // <-- Take a look at this line
            os.Close();
    }
    // ...
}
It's clearly saying that the validate() method is throwing an exception; that's how validation works in Laravel 5.2+. Note that adding the exception type to the $dontReport array in your App\Exceptions\Handler class stops it from being reported (logged), not from being thrown:

class Handler extends ExceptionHandler
{
    /**
     * A list of the exception types that should not be reported.
     *
     * @var array
     */
    protected $dontReport = [
        \Illuminate\Auth\AuthenticationException::class,
        \Illuminate\Auth\Access\AuthorizationException::class,
        \Symfony\Component\HttpKernel\Exception\HttpException::class,
        \Illuminate\Database\Eloquent\ModelNotFoundException::class,
        \Illuminate\Session\TokenMismatchException::class,
        \Illuminate\Validation\ValidationException::class, // <= Here
    ];

Read more: Documentation how to migrate from Laravel 5.1 to 5.2

Or you can handle the exception yourself in a new catch block (remember to import Illuminate\Validation\ValidationException at the top of the file):

public function create(Request $request)
{
    try {
        $this->validate($request, [
            'name' => 'required',
            'email' => 'required',
            'password' => 'required'
        ]);

        $this->user->create($request);
    } catch (ValidationException $e) {
        return response()->json([
            'success' => false,
            'message' => 'There were validation errors.',
        ], 400);
    } catch (Exception $e) {
        return response()->json([
            'success' => false,
            'message' => 'Something went wrong, please try again later.'
        ], 400);
    }

    return response()->json([
        'success' => true,
        'message' => 'User successfully saved!'
    ], 201);
}