It has nothing to do with whether the file has been decrypted or not when you call getNumPages(). If we take a look at the source code of getNumPages():

def getNumPages(self):
    """
    Calculates the number of pages in this PDF file.
    :return: number of pages
    :rtype: int
    :raises PdfReadError: if file is encrypted and restrictions prevent this action.
    """
    # Flattened pages will not work on an Encrypted PDF;
    # the PDF file's page count is used in this case. Otherwise,
    # the original method (flattened page count) is used.
    if self.isEncrypted:
        try:
            self._override_encryption = True
            self.decrypt('')
            return self.trailer["/Root"]["/Pages"]["/Count"]
        except:
            raise utils.PdfReadError("File has not been decrypted")
        finally:
            self._override_encryption = False
    else:
        if self.flattenedPages == None:
            self._flatten()
        return len(self.flattenedPages)

we will notice that it is the self.isEncrypted property that controls the flow, and the isEncrypted property is read-only and does not change even after the PDF is decrypted. So the easy way to handle the situation is to add the password as a keyword argument with an empty string as the default value, and pass your password when using the getNumPages() method and any other method built on top of it.
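A minimal sketch of that change (a local patch to PyPDF2's PdfFileReader.getNumPages, not an official PyPDF2 API; the password argument is the addition):

def getNumPages(self, password=''):
    """Same as before, but the password can be passed in instead of the hard-coded empty string."""
    if self.isEncrypted:
        try:
            self._override_encryption = True
            self.decrypt(password)   # was: self.decrypt('')
            return self.trailer["/Root"]["/Pages"]["/Count"]
        except:
            raise utils.PdfReadError("File has not been decrypted")
        finally:
            self._override_encryption = False
    else:
        if self.flattenedPages == None:
            self._flatten()
        return len(self.flattenedPages)

# usage, after patching:
# reader = PdfFileReader("protected.pdf")
# print(reader.getNumPages(password="my secret"))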
Subject#doAs vs Subject#doAsPrivileged

The default authorization algorithm employed by the AccessController is based on Permission intersection: If all of an AccessControlContext's ProtectionDomains, potentially combined with a Subject's Principals, have, statically and/or as per the Policy in effect, the Permission being checked, the evaluation succeeds; otherwise it fails.

Subject#doAs does not work in your case because your Permission is granted to the combination of your ProtectionDomain and your Principal, but not to the domain itself. Specifically, at the time of the AccessController#checkPermission(customPermission) invocation, the effective AccessControlContext included the following relevant (as far as Permission evaluation is concerned) frames:

Frame # | ProtectionDomain          | Permissions
--------+---------------------------+---------------------------------------------
   2    | "file:./bin/"             | { CustomPermission("someMethod"),
        | + CustomPrincipal("user") |   permissions statically assigned by default
        |                           |   by the ClassLoader }
--------+---------------------------+---------------------------------------------
   1    | "file:./bin/"             | { AuthPermission(
        |                           |     "createLoginContext.MyLoginModule"),
        |                           |   AuthPermission("doAs"), default as above }
--------+---------------------------+---------------------------------------------

The intersection of those frames' permissions does, of course, not include the desired CustomPermission.

Subject#doAsPrivileged, when given a null AccessControlContext, on the other hand, does the trick, because it "trims" the effective context's stack to its top-most frame, i.e., the one from which doAsPrivileged gets invoked. What actually happens is that the null (blank) context gets treated by the AccessController as if it were a context whose permission evaluation yields AllPermission; in other words:

AllPermission ⋂ permissions(frame 2) = { CustomPermission("someMethod"), default ones },

which is (save for the minimal set of seemingly extraneous statically-assigned Permissions) the desired outcome. Of course, in cases where such potentially arbitrary privilege escalation is undesired, a custom context, whose encapsulated domains' permissions express the maximum set of privileges you are willing to grant (to e.g. some Subject), can be passed to doAsPrivileged instead of the null one.

Why is the Principal implementation forced to override equals and hashCode? The following stack trace snippet illustrates why:

at java.lang.Thread.dumpStack(Thread.java:1329)
at com.foo.bar.PrincipalImpl.equals(PrincipalImpl.java:53)
at javax.security.auth.Subject$SecureSet.contains(Subject.java:1201)
at java.util.Collections$SynchronizedCollection.contains(Collections.java:2021)
at java.security.Principal.implies(Principal.java:92)
at sun.security.provider.PolicyFile.addPermissions(PolicyFile.java:1374)
at sun.security.provider.PolicyFile.getPermissions(PolicyFile.java:1228)
at sun.security.provider.PolicyFile.getPermissions(PolicyFile.java:1191)
at sun.security.provider.PolicyFile.getPermissions(PolicyFile.java:1132)
at sun.security.provider.PolicyFile.implies(PolicyFile.java:1086)
at java.security.ProtectionDomain.implies(ProtectionDomain.java:281)
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:450)
at java.security.AccessController.checkPermission(AccessController.java:884)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
...
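Going back to the first part of the answer, a hedged sketch of the doAsPrivileged call being discussed (CustomPermission is the question's own Permission subclass, and the LoginContext name matches the AuthPermission grant shown above; both are assumptions about your setup):

import java.security.AccessController;
import java.security.PrivilegedAction;
import javax.security.auth.Subject;
import javax.security.auth.login.LoginContext;

public class DoAsPrivilegedSketch {
    public static void main(String[] args) throws Exception {
        LoginContext lc = new LoginContext("MyLoginModule");
        lc.login();
        Subject subject = lc.getSubject();

        Subject.doAsPrivileged(subject, (PrivilegedAction<Void>) () -> {
            // evaluated against frame 2 only: domain "file:./bin/" + CustomPrincipal("user")
            AccessController.checkPermission(new CustomPermission("someMethod"));
            return null;
        }, null); // null context is treated like AllPermission in the intersection
    }
}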
Further reading: Default Policy Implementation and Policy File Syntax Secure Coding Guidelines for Java SE - §9 - Access Control Troubleshooting Security
In order for you to get a non null user instance from SessionUtils.getAuth0User(req) some piece of code must first call SessionUtils.setAuth0User. This should be done when you receive confirmation that the user authenticated with success. In the auth0-servlet-sample you were using as reference this is done by configuring an Auth0ServletCallback that will handle requests performed to /callback endpoint. Since the Auth0ServletCallback calls (see code below) the set user for you, in the servlet example you can then get the user with success. protected void store(final Tokens tokens, final Auth0User user, final HttpServletRequest req) { SessionUtils.setTokens(req, tokens); SessionUtils.setAuth0User(req, user); } At the moment the available samples (auth0-servlet-sample, auth0-servlet-sso-sample, auth0-spring-mvc-sample, auth0-spring-security-api-sample and auth0-spring-security-mvc-sample) don't include one for spark-java so I can't refer you to any sample. In order to solve this you have to include additional logic to process the result of the authentication operation in your spark-java application and in case of success call the SessionUtils.setAuth0User yourself if you then want to use the corresponding SessionUtils.getAuth0User method. For general guidance on integrating a web application with Auth0 check Integrating a Web App with Auth0.
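A rough sketch of what that could look like as a spark-java route (exchangeCodeForTokens and fetchUserProfile are hypothetical placeholders for your own logic that finishes the Auth0 authentication; SessionUtils comes from the auth0-servlet library and works on the raw HttpServletRequest):

post("/callback", (req, res) -> {
    // placeholders for your own code exchange / profile retrieval against Auth0
    Tokens tokens = exchangeCodeForTokens(req.queryParams("code"));
    Auth0User user = fetchUserProfile(tokens);

    if (user != null) {
        // the same calls the Auth0ServletCallback makes in its store() method
        SessionUtils.setTokens(req.raw(), tokens);
        SessionUtils.setAuth0User(req.raw(), user);
        res.redirect("/portal/home");
    } else {
        res.redirect("/login");
    }
    return "";
});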
Figure out what the API of the Kayako system looks like. In WordPress you can do something like this in order to authenticate the users:

// this action is executed just before the invocation of the WordPress authentication process
add_action('wp_authenticate', 'checkTheUserAuthentication');

function checkTheUserAuthentication() {
    $username = $_POST['log'];
    $password = $_POST['pwd'];

    // try to log into the external service or database with username and password
    $ext_auth = try2AuthenticateExternalService($username, $password);

    // if external authentication was successful
    if ($ext_auth) {
        // find a way to get the user id
        $user_id = username_exists($username);
        // userdata will contain all information about the user
        $userdata = get_userdata($user_id);
        $user = set_current_user($user_id, $username);
        // this will actually make the user authenticated as soon as the cookie is in the browser
        wp_set_auth_cookie($user_id);
        // the wp_login action is used by a lot of plugins, just decide if you need it
        do_action('wp_login', $userdata->ID);
        // you can redirect the authenticated user to the "logged-in" page;
        // define('MY_PROFILE_PAGE', 1); somewhere first, e.g.
        header("Location:" . get_page_link(MY_PROFILE_PAGE));
    }
}

The try2AuthenticateExternalService() method should contain some curl request (or similar) to the remote service.
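As a hedged sketch (the endpoint URL, field names and success criterion are placeholders and depend entirely on how the Kayako API actually works), try2AuthenticateExternalService() could look roughly like this:

function try2AuthenticateExternalService($username, $password) {
    // hypothetical login endpoint of the remote service
    $ch = curl_init('https://helpdesk.example.com/api/v1/login');
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
        'email'    => $username,
        'password' => $password,
    )));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

    $response = curl_exec($ch);
    $status   = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    // treat any 2xx response as "credentials accepted"
    return $response !== false && $status >= 200 && $status < 300;
}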
It sounds like you should be creating stateful/configurable Module instances and then generating separate Components or Subcomponents for each ClientSDK you build.

public class ClientSDK {
  @Inject SDKConfiguration configuration;
  @Inject LibraryAuthenticationManager authManager;
  // ...

  public static class Builder {
    // ...
    public ClientSDK build() {
      return DaggerClientSDKComponent.builder()
          .configurationModule(new ConfigurationModule(
              apiKey, customAuthManager, baseApiUrl))
          .build()
          .getClientSdk();
    }
  }
}

...where your ConfigurationModule is a @Module you write that takes all of those configuration parameters and makes them accessible through properly-qualified @Provides methods, and your ClientSDKComponent is a @Component you define that refers to the ConfigurationModule (among others) and defines a @Component.Builder inner interface. The Builder is important because you're telling Dagger it can no longer use its modules statically, or through instances it creates itself: you have to call a constructor or otherwise procure an instance, which the Component can then consume to provide instances. Dagger won't get into the business of saving your named singletons, but it doesn't need to: you can save them yourself in a static Map, or save the ClientSDKComponent instance as an entry point. For that matter, if you're comfortable letting go of some of the control of ClientSDK, you could even make ClientSDK itself the Component; however, I'd advise against it, because you'll have less control of the static methods you want, and will lose the opportunity to write arbitrary methods or throw exceptions as needed. You don't have to worry about scopes unless you want to: Dagger 2 tracks scope lifetime via component instance lifetime, so scopes are very easy to add for clarity but are not strictly necessary if you're comfortable with "unscoped" objects. If you have an object graph of true singleton objects, you can also store that component as a conventional (static final field) singleton and generate your ClientSDKComponent as a subcomponent of that longer-lived component. If it's important to your build dependency graph, you can also phrase it the other way, and have your ClientSDKComponent as a standalone component that depends on another @Component.
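A sketch of what such a ConfigurationModule could look like (the qualifier names are illustrative; LibraryAuthenticationManager is the type from the snippet above):

import dagger.Module;
import dagger.Provides;
import javax.inject.Named;

@Module
public class ConfigurationModule {
    private final String apiKey;
    private final LibraryAuthenticationManager authManager;
    private final String baseApiUrl;

    public ConfigurationModule(String apiKey,
                               LibraryAuthenticationManager authManager,
                               String baseApiUrl) {
        this.apiKey = apiKey;
        this.authManager = authManager;
        this.baseApiUrl = baseApiUrl;
    }

    @Provides @Named("apiKey") String provideApiKey() { return apiKey; }
    @Provides @Named("baseApiUrl") String provideBaseApiUrl() { return baseApiUrl; }
    @Provides LibraryAuthenticationManager provideAuthManager() { return authManager; }
}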
The error is due to missing dependencies. Verify that you have these jar files in the Spark home directory:

spark-redshift_2.10-3.0.0-preview1.jar
RedshiftJDBC41-1.1.10.1010.jar
hadoop-aws-2.7.1.jar
aws-java-sdk-1.7.4.jar
(aws-java-sdk-s3-1.11.60.jar -- newer version, but not everything worked with it)

Put these jar files in $SPARK_HOME/jars/ and then start Spark:

pyspark --jars $SPARK_HOME/jars/spark-redshift_2.10-3.0.0-preview1.jar,$SPARK_HOME/jars/RedshiftJDBC41-1.1.10.1010.jar,$SPARK_HOME/jars/hadoop-aws-2.7.1.jar,$SPARK_HOME/jars/aws-java-sdk-s3-1.11.60.jar,$SPARK_HOME/jars/aws-java-sdk-1.7.4.jar

(SPARK_HOME should be "/usr/local/Cellar/apache-spark/$SPARK_VERSION/libexec")

This will run Spark with all necessary dependencies. Note that you also need to specify the authentication type 'forward_spark_s3_credentials'=True if you are using awsAccessKeys.

from pyspark.sql import SQLContext
from pyspark import SparkContext

sc = SparkContext(appName="Connect Spark with Redshift")
sql_context = SQLContext(sc)
sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", <ACCESSID>)
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", <ACCESSKEY>)

df = sql_context.read \
    .format("com.databricks.spark.redshift") \
    .option("url", "jdbc:redshift://example.coyf2i236wts.eu-central-1.redshift.amazonaws.com:5439/agcdb?user=user&password=pwd") \
    .option("dbtable", "table_name") \
    .option('forward_spark_s3_credentials', True) \
    .option("tempdir", "s3n://bucket") \
    .load()

Common errors afterwards are:

Redshift Connection Error: "SSL off"
Solution: .option("url", "jdbc:redshift://example.coyf2i236wts.eu-central-1.redshift.amazonaws.com:5439/agcdb?user=user&password=pwd&ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory")

S3 Error: When unloading the data, e.g. after df.show(), you get the message: "The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint."
Solution: The bucket and cluster must be in the same region.
From your description I'm going to summarize your problem as follows: You've configured node.js to accept TLS connections. You want Apache to accept TLS connections.

OK. So first question. You said: I made https server in Node, and on https://localhost:8082 everything worked as it should. Without changing anything, what happens when you try to access http://localhost:8082? I can tell you what happens (no guesses, no "should", I can definitely tell you) but you should at least try it in order to see for yourself what happens.

SPOILERS: HTTP does not support listening for both TLS and unencrypted connections on a single port. It's just the way the protocol was specified. Other protocols like POP3, SMTP, FTP etc. have the capability to do this, but not HTTP. So what happens is that trying to access http://localhost:8082 will fail.

Now, look carefully at your Apache config:

# THIS is the problematic part:
ProxyPass http://localhost:8082/
ProxyPassReverse http://localhost:8082/
#         ^
#         |______ notice this?

So the problem is you are proxying to a URL that does not work. You have two options here. Both of them are valid depending on how you want to design your architecture.

Proxy to https instead of http.

ProxyPass https://localhost:8082/
ProxyPassReverse https://localhost:8082/

The advantage of this is that you get end-to-end encryption. Even if someone manages to log in to your server they can't listen in on the connection. The disadvantage of this is you're encrypting twice, which means you're using much more CPU time to serve a request.

Remove TLS from node.js. The advantage of this is that you're letting Apache handle all encryption, so your web app does not need to spend any CPU time handling encryption itself. For very large services like Google or Facebook they can even offload the encryption to another front-end server (or more likely servers) so that the server running your web app won't be busy encrypting and decrypting HTTP connections. The disadvantage is that anyone who can log in to your server can easily listen in on the connection.

There is a third way that some people use but it's getting less and less popular. Run node.js listening on two ports. For example, configure it to listen on 8082 for http and also on 8084 for https; then you can proxy http pages to 8082 and https pages to 8084. However, looking at your Apache config I can tell this is not what you want to do.
The superuser has the full set of permissions granted, therefore you are able to see all the permissions. But when a new user is created, he will not have any permissions set; there is no relation between the user and the permissions, so you are getting the above error.

Note: You can check the available permissions for the logged-in user inside a template by using {{ perms }}

For a specific app: {{ perms.app_name }}
For a specific model: {{ perms.app_name.model_name }}

Suppose you want to grant access to a view only to users with a specific permission on a particular model; you can use the permission_required decorator like this:

from django.contrib.auth.decorators import permission_required

@permission_required('polls.can_vote')
def my_view(request):
    ...

Now the user with the can_vote permission on "polls" will be granted access. For further details you can refer to the Django documentation on permissions.

The authentication back-end is responsible for user permissions. I guess you are using your own custom authentication back-end. However, if you are doing so you may have forgotten to import ModelBackend:

from django.contrib.auth.backends import ModelBackend

Now make sure to extend this back-end in your own custom back-end:

class EmailBackend(ModelBackend):
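A minimal sketch of such a backend (the email-based lookup is an assumption about what your EmailBackend does; the important part is that it inherits ModelBackend so the permission methods keep working):

from django.contrib.auth import get_user_model
from django.contrib.auth.backends import ModelBackend

class EmailBackend(ModelBackend):
    def authenticate(self, request=None, username=None, password=None, **kwargs):
        UserModel = get_user_model()
        try:
            user = UserModel.objects.get(email__iexact=username)
        except UserModel.DoesNotExist:
            return None
        if user.check_password(password) and self.user_can_authenticate(user):
            return user
        return None

    # has_perm / get_all_permissions are inherited from ModelBackend,
    # so {{ perms }} and @permission_required keep working.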
This is a pretty good explanation of the issue you are seeing from Third Party Application Fails Using LDAP over SSL: The following issue is one that I have seen come up from time to time and can be a challenge for IT administrators who are trying to use the built in Version 2 Domain Controller Authentication template in their environment. The concern may be seen when folks used a version 1 certificate in the past but the newer one (version 2) seems to give some unexpected results. So what’s the problem? Well, if you have a third party application which uses LDAP over SSL to connect to the domain controller it may not work initially using the new version 2 Domain controller Authentication certificate. So let’s go over the issue in detail. A 3rd party application was making LDAP over SSL connections to the Domain Controllers as part of what it does intentionally. This was working when the domain controller had a certificate based on the “old style” version 1 Domain Controller template. An Enterprise Certification Authority had issued the certificate. However, the “Domain Controller” certificates have been superseded by certificates based on the “Domain Controller Authentication” certificates which can happen for several reasons that we won’t go into great detail on in this blog post today. The end result which is seen is that the 3rd party application now fails. What is the apparent problem? By default, the “Domain Controller Authentication” certificate has a blank subject field and the Subject Alternate Name (SAN) field is marked critical on the “Domain Controller Authentication” certificate. Simply put, some applications cannot use a certificate if the SAN field being marked critical. Why is this field important? Some applications may have difficulty using the certificate if the SAN field is marked critical and the subject field is blank because of how these fields are checked when deciding whether to use a certificate. Assuming this is Active Directory anyway. But it would probably be valid elsewhere too. Long story short, the default DC Auth template for the LDAP SSL certificates omits the subject name entirely in favor of filling in the subject alternative names and marking it as critical. However, I know for a fact that this can lead to issues when using the OpenLDAP/OpenSSL libraries when trying to connect over TLS/SSL. If you're using OpenLDAP you can use ldap_set_option with the LDAP_OPT_DEBUG_LEVEL constant and set the value to 7. Then it should tell you exactly what it is tripping over with regards to the certificate. You could either have them re-issue a new certificate that actually fills in the subject name or (if using OpenLDAP for the library pieces) you could change the TLS_REQCERT option to allow or none (which would unfortunately raise some security concerns...).
From the perspective of JWT the approach would be fine, as claim values can be any JSON type, so numbers are fine.

For JWTs, while claim names are strings, claim values can be any JSON type. (source: JSON Web Token (JWT))

However, if you have a requirement to stay compliant with OAuth2 then your proposal would not be acceptable. Staying compliant may be beneficial if you want to start with your own authorization server but want to keep your options open and easily switch to either a third-party authorization server hosted by you or a cloud authentication provider like Auth0. (Disclosure: I work at Auth0.)

If I were you I would stay OAuth2 compliant even if I had no plans on switching implementations. It should be easy to implement a transformation from multiple string values to the integer used to represent the permissions. In the general scenario this simple transformation would never become the performance bottleneck of your application, but if you do have very specific performance requirements you can always include your integer in the JWT as a string:

{ scope: "7" }

This way you'll be OAuth2 compliant and can use just a simple parsing operation to convert it to a value usable for your bitwise comparisons.
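A small sketch of that transformation (the flag values and claim layout are illustrative, not part of any specification):

# illustrative permission bits
READ, WRITE, DELETE = 1, 2, 4

def scope_to_mask(claims):
    # handles both the {"scope": "7"} form and a space-delimited scope string
    scope = claims.get("scope", "")
    if scope.isdigit():
        return int(scope)
    names = {"read": READ, "write": WRITE, "delete": DELETE}
    mask = 0
    for part in scope.split():
        mask |= names.get(part, 0)
    return mask

assert scope_to_mask({"scope": "7"}) & WRITE                     # cheap bitwise check
assert scope_to_mask({"scope": "read delete"}) == READ | DELETE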
As is normally the case in software development, you have a couple of options depending on requirements. The mandatory requirement is that your client (desktop) application needs to send something to your REST API so that the API can perform up to two decisions: Decide who the user is. Decide if the user is authorized to perform the currently requested action.

The second step may not be applicable if all authenticated users have access to exactly the same set of actions, so I'll cover both scenarios. Also note that, for the first step, sending the Google user ID is not a valid option as that information can be obtained by other parties and does not ensure that the user did authenticate to use your application.

Option 1 - Authentication without fine-grained authorization

Either always sending the id_token or exchanging that token for your own custom session identifier both meet the previous requirement, because the id_token contains an audience that clearly indicates the user authenticated to use your application, and the session identifier is generated by your application so it can also ensure that. The requests to your API need to use HTTPS, otherwise it will be too easy for the token or session ID to be captured by an attacker.

If you go with the id_token alternative you need to take into consideration that the token will expire; for this, a few options again: repeat the authentication process another time; if the user still has a session it will indeed be quicker, but you still have to open a browser, local server and repeat the whole steps. request offline_access when doing the first authentication.

With the last option you should get a refresh token that would allow your application to identify the user even after the first id_token expires. I say should, because Google seems to do things a bit differently than the specification; for example, the way to obtain the refresh token is by providing access_type=offline instead of the offline_access from OpenID Connect.

Personally, I would go with the session identifier as you'll have more control over lifetime and it may also be simpler.

Option 2 - Authentication + fine-grained authorization

If you need a fine-grained authorization system for your REST API then the best approach would be to authenticate your users with Google, but then have an OAuth 2.0 compliant authorization server that would issue access tokens specific to your API.

For the authorization server implementation, you could either:

Implement it yourself or leverage open source components
   ⤷ may be time consuming, complex and mitigation of security risks would all fall on you

Use a third-party OAuth 2.0 as-a-service authorization provider like Auth0
   ⤷ easy to get started, depending on amount of usage (the free plan on Auth0 goes up to 7000 users) it will cost you money instead of time

Disclosure: I work at Auth0.
You need to realize that you cannot prevent users from accessing your unprotected files on an open platform like the PC. That is what DRM has been trying to achieve for decades, however this goal is unachievable by definition. The only thing you can do is to make it harder / more cumbersome to access the unprotected files; however, in the end, if someone decides to put enough effort into circumventing your protection, she or he will always succeed. For instance, you may obfuscate your source files (by dedicated obfuscators or simply by minimizing them), you can use some non-standard file encoding (reverse of base64) or you may use some kind of encryption method. Because you need to ship your key as well, any encryption method will do, no matter how secure or insecure it is. Finally, as others have already mentioned, the crypto primitives are located in the System.Security.Cryptography namespace. Note however that for security-sensitive systems I would not recommend using them directly, because there are many nuances and getting it right is actually quite hard. You should have a look at libraries like SecurityDriven.Inferno, which wrap the crypto primitives with secure defaults.
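For illustration, a hedged sketch of the System.Security.Cryptography route (the passphrase necessarily ships with the application, so, as said above, this only raises the bar; names and parameters are illustrative):

using System.IO;
using System.Security.Cryptography;

static class ResourceScrambler
{
    static readonly byte[] Salt = { 1, 2, 3, 4, 5, 6, 7, 8 };

    public static byte[] Encrypt(byte[] plain, string passphrase)
    {
        using (var kdf = new Rfc2898DeriveBytes(passphrase, Salt, 10000))
        using (var aes = Aes.Create())
        {
            aes.Key = kdf.GetBytes(32);   // AES-256 key derived from the shipped passphrase
            aes.IV = kdf.GetBytes(16);
            using (var ms = new MemoryStream())
            {
                using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                    cs.Write(plain, 0, plain.Length);
                return ms.ToArray();
            }
        }
    }
}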
"and I need the name of the user that logged in, can anyone help me how should I go about this"

For a UWP app, this is impossible using the official managed API. See the MobileServiceAuthentication class here:

internal async Task<MobileServiceUser> LoginAsync()
{
    string response = await this.LoginAsyncOverride();
    if (!string.IsNullOrEmpty(response))
    {
        JToken authToken = JToken.Parse(response);

        // Get the Mobile Services auth token and user data
        this.Client.CurrentUser = new MobileServiceUser((string)authToken["user"]["userId"]);
        this.Client.CurrentUser.MobileServiceAuthenticationToken = (string)authToken[LoginAsyncAuthenticationTokenKey];
    }

    return this.Client.CurrentUser;
}

The official SDK just retrieves the userId and MobileServiceAuthenticationToken; for other platforms, we need to use the GetIdentitiesAsync() method to get the identity, see How to get user name, email, etc. from MobileServiceUser? or LINK. The username info actually has been retrieved in the SSO process. So you have to implement the auth process (extend the method based on the open source code) and maintain the username information as you need. If you can get the user's input, maybe you can also call the Live API: https://msdn.microsoft.com/en-us/library/office/dn659736.aspx#Requesting_info
We are also not relying on sails-permissions. In our app, users can be members of multiple orgs. We are using auth0 for all authentication activities, i.e. every request must include a jwt that is included in the request header. The jwt includes userId, orgId and role. Sails policies decode the jwt and attach userId, orgId and role to the req object for all later checks. Every model has the property orgId - we are using MongoDB. Every controller, db operation, etc. adds this verified orgId to the query. Actually we have a small pipeline preparing the query: we add the orgId, in update cases we filter out unwanted property updates, etc. This approach does not require additional db calls for separation of tenants. Some models have specific access requirements per individual RECORD. Here we store allowedUser properties (one for read, one for update, etc.) on exactly this record and we extend the query once more so that only those records are returned or updated (or whatever the operation is) where the current user is included in the applicable allowedUsers property. This approach also does not require additional db calls. It leverages MongoDB-specific query features, though. We currently do not have ACL-like requirements, which would sit right between the 2 approaches I described above (regarding access control granularity).
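A rough sketch of the kind of policy described above (secret handling, claim names and error handling are simplified, and it assumes the jsonwebtoken package):

// api/policies/decodeJwt.js
var jwt = require('jsonwebtoken');

module.exports = function (req, res, next) {
  var header = req.headers.authorization || '';
  var token = header.replace(/^Bearer /, '');

  jwt.verify(token, process.env.AUTH0_SIGNING_SECRET, function (err, claims) {
    if (err) { return res.forbidden('Invalid token'); }

    // attached here, used later to scope every query to the tenant
    req.userId = claims.userId;
    req.orgId = claims.orgId;
    req.role = claims.role;
    return next();
  });
};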
Finally I found a solution for this problem. Here I will share my solution for others who are having the same problem. Add this class to your ASP.NET Core application:

using System;
using Microsoft.Extensions.Localization;

namespace App.Utilities
{
    public static class StringLocalizerFactoryExtensions
    {
        public static IStringLocalizer CreateConventional<T>(this IStringLocalizerFactory factory)
        {
            return factory.CreateConventional(typeof(T));
        }

        public static IStringLocalizer CreateConventional(this IStringLocalizerFactory factory, Type type)
        {
            if (type.Module.ScopeName != "CommonLanguageRuntimeLibrary")
            {
                string[] parts = type.FullName.Split(new[] { type.Assembly.FullName.Split(',')[0] }, StringSplitOptions.None);
                string name = parts[parts.Length - 1].Trim('.');
                return factory.CreateConventional(name);
            }
            else
            {
                return factory.Create(type);
            }
        }

        public static IStringLocalizer CreateConventional(this IStringLocalizerFactory factory, string resourceName)
        {
            return factory.Create(resourceName, null);
        }

        public static IStringLocalizer CreateDataAnnotation(this IStringLocalizerFactory factory, Type type)
        {
            if (type.Module.ScopeName != "CommonLanguageRuntimeLibrary")
            {
                return factory.Create("DataAnnotation.Localization", "App_LocalResources");
            }
            else
            {
                return factory.Create(type);
            }
        }
    }
}

... and in your Startup.cs file replace the following part:

services.AddLocalization(options => options.ResourcesPath = "Resources");

services.AddMvc()
    .AddViewLocalization(LanguageViewLocationExpanderFormat.Suffix)
    .AddDataAnnotationsLocalization();

... with this code:

services.AddLocalization(options => options.ResourcesPath = "Resources");

services.AddMvc()
    .AddViewLocalization(LanguageViewLocationExpanderFormat.Suffix)
    //The following part includes the change:
    .AddDataAnnotationsLocalization(options => options.DataAnnotationLocalizerProvider =
        (type, factory) => factory.CreateConventional(type));

The code treats your view model localization resources like the ones used for views or any other place where the default IStringLocalizerFactory can be used. Therefore, no more DataAnnotation.Localization.de-DE.resx resources and App_LocalResources folder are needed. Just create a series of resource files with the conventional naming (Models.AccountViewModels.RegisterViewModel.en-US.resx or Models/AccountViewModels/RegisterViewModel.sv-SE.resx in the Resources folder, which is set by calling services.AddLocalization(options => options.ResourcesPath = "Resources")) and you are ready to go. The TagHelpers and HtmlHelpers will start working and translating the error messages. Also, this will work for DisplayAttribute.Name out of the box.
(v1.1.0-preview1-final + .net v4.6.2) Update 1: Here is my project.json: { "userSecretsId": "...", "dependencies": { "Microsoft.NETCore.Platforms": "1.1.0-preview1-*", "Microsoft.AspNetCore.Authentication.Cookies": "1.1.0-preview1-final", "Microsoft.AspNetCore.Diagnostics": "1.1.0-preview1-final", "Microsoft.AspNetCore.DataProtection": "1.1.0-preview1-final", "Microsoft.AspNetCore.Diagnostics.EntityFrameworkCore": "1.1.0-preview1-final", "Microsoft.AspNetCore.Identity.EntityFrameworkCore": "1.1.0-preview1-final", "Microsoft.AspNetCore.Mvc": "1.1.0-preview1-final", "Microsoft.AspNetCore.Razor.Tools": "1.0.0-preview3-final", "Microsoft.ApplicationInsights.AspNetCore": "1.0.2", "Microsoft.AspNetCore.Mvc.Localization": "1.1.0-preview1-final", "Microsoft.AspNetCore.Mvc.Razor": "1.1.0-preview1-final", "Microsoft.AspNetCore.Mvc.TagHelpers": "1.1.0-preview1-final", "Microsoft.AspNetCore.Mvc.DataAnnotations": "1.1.0-preview1-final", "Microsoft.Extensions.Configuration.CommandLine": "1.1.0-preview1-final", "Microsoft.Extensions.Configuration.FileExtensions": "1.1.0-preview1-final", "Microsoft.AspNet.WebApi.Client": "5.2.3", "Microsoft.AspNetCore.Routing": "1.1.0-preview1-final", "Microsoft.AspNetCore.Server.IISIntegration": "1.1.0-preview1-final", "Microsoft.AspNetCore.Server.Kestrel": "1.1.0-preview1-final", "Microsoft.AspNetCore.StaticFiles": "1.1.0-preview1-final", "Microsoft.EntityFrameworkCore": "1.1.0-preview1-final", "Microsoft.EntityFrameworkCore.SqlServer.Design": "1.1.0-preview1-final", "Microsoft.Extensions.Configuration.EnvironmentVariables": "1.1.0-preview1-final", "Microsoft.Extensions.Configuration.Json": "1.1.0-preview1-final", "Microsoft.Extensions.Configuration.UserSecrets": "1.1.0-preview1-final", "Microsoft.Extensions.Logging": "1.1.0-preview1-final", "Microsoft.Extensions.Logging.Console": "1.1.0-preview1-final", "Microsoft.Extensions.Logging.Debug": "1.1.0-preview1-final", "Microsoft.Extensions.Options.ConfigurationExtensions": "1.1.0-preview1-final", "Microsoft.VisualStudio.Web.BrowserLink.Loader": "14.0.0", "Microsoft.VisualStudio.Web.CodeGeneration.Tools": "1.0.0-preview3-final", "Microsoft.VisualStudio.Web.CodeGenerators.Mvc": "1.0.0-preview3-final", "Microsoft.AspNetCore.Hosting": "1.1.0-preview1-final", "Microsoft.AspNetCore.Hosting.WindowsServices": "1.1.0-preview1-final", "Loggr.Extensions.Logging": "1.0.0", "Microsoft.EntityFrameworkCore.SqlServer": "1.1.0-preview1-final", "Microsoft.EntityFrameworkCore.Tools": "1.0.0-preview3-final", "BundlerMinifier.Core": "2.2.296" }, "tools": { "Microsoft.AspNetCore.Razor.Tools": "1.0.0-preview3-final", "Microsoft.AspNetCore.Server.IISIntegration.Tools": "1.0.0-preview3-final", "Microsoft.EntityFrameworkCore.Tools.DotNet": "1.0.0-preview3-final", "Microsoft.Extensions.SecretManager.Tools": "1.0.0-preview3-final", "Microsoft.VisualStudio.Web.CodeGeneration.Tools": { "version": "1.0.0-preview3-final", "imports": [ "portable-net45+win8" ] } }, "frameworks": { "net462": {} }, "buildOptions": { "emitEntryPoint": true, "preserveCompilationContext": true }, "runtimeOptions": { "configProperties": { "System.GC.Server": true } }, "publishOptions": { "include": [ "wwwroot", "**/*.cshtml", "appsettings.json", "web.config" ] }, "scripts": { "prepublish": [ "bower install", "dotnet bundle" ], "postpublish": [ "dotnet publish-iis --publish-folder %publish:OutputPath% --framework %publish:FullTargetFramework%" ] } } Update 2: In case someone wants the code to just work as promised with the DataAnnotation.Localization in the App_LocalResourses 
folder, I have updated the StringLocalizerFactoryExtensions code. Use the updated class and the following code in your Startup.cs class instead and it should work.

services.AddLocalization(options => options.ResourcesPath = "Resources");

services.AddMvc()
    .AddViewLocalization(LanguageViewLocationExpanderFormat.Suffix)
    //The following part includes the change:
    .AddDataAnnotationsLocalization(options => options.DataAnnotationLocalizerProvider =
        (type, factory) => factory.CreateDataAnnotation(type));
You only need to place one keytab on the application server to successfully do Kerberos SSO authentication, not multiple ones. When users access a service which is Kerberos-enabled, they obtain a Kerberos ticket for that service from the KDC. The keytab on the application server decrypts the contents of that ticket, because inside the keytab is a representation of the service running on the application server users want to access, the FQDN of the application server, the Kerberos realm name which will honor the authentication attempt, and a cryptographic hash of the service principal in the KDC. As the passwords in each are the same, authentication succeeds. This is a very simplified explanation. The keytab won't be able to determine the user's group membership, however. That part is authorization, so you'll need to make an LDAP authorization call back to the Directory server if you want to parse group membership. There's only one exception to this rule that I know of. In a homogeneous Microsoft-only Active Directory environment, in which Kerberos is the primary authentication method (it is by default), keytabs are not used. Microsoft application servers can, without a keytab, natively decrypt the Kerberos ticket to determine who the user is and parse that same ticket for the user's group information as well, without any need for LDAP calls back to the Directory server. Parsing the Kerberos service ticket for group information is known as reading the PAC. In an AD environment however, non-Microsoft platforms cannot "read the PAC" for group membership, as Microsoft has never exposed how they do this as far as I am aware. See http://searchwindowsserver.techtarget.com/feature/Advanced-Kerberos-topics-From-authentication-to-authorization.
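If you do need that extra authorization step, a hedged sketch of the LDAP group lookup on the Java side (hostname, base DN, bind account and the sAMAccountName filter are placeholders for your directory):

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class GroupLookup {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://dc01.example.com:389");
        env.put(Context.SECURITY_PRINCIPAL, "svc_lookup@EXAMPLE.COM");
        env.put(Context.SECURITY_CREDENTIALS, "secret");

        DirContext ctx = new InitialDirContext(env);
        SearchControls sc = new SearchControls();
        sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
        sc.setReturningAttributes(new String[] { "memberOf" });

        // look up the groups of the user that just authenticated via Kerberos
        NamingEnumeration<SearchResult> results =
                ctx.search("dc=example,dc=com", "(sAMAccountName=jdoe)", sc);
        while (results.hasMore()) {
            System.out.println(results.next().getAttributes().get("memberOf"));
        }
        ctx.close();
    }
}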
We found this blog entry on Cloud Identity to be really helpful to get started with something similar. We are using Web API so it's not exactly the same. You will need to add this to your Startup.Auth.cs file:

app.UseActiveDirectoryFederationServicesBearerAuthentication(
    new ActiveDirectoryFederationServicesBearerAuthenticationOptions
    {
        Audience = ConfigurationManager.AppSettings["ida:Audience"],
        MetadataEndpoint = ConfigurationManager.AppSettings["ida:MetadataEndpoint"]
    });

In your web.config you will need keys to point to those entries:

<add key="ida:AdfsMetadataEndpoint" value="https://adfs.yourdomain.com/federationmetadata/2007-06/federationmetadata.xml" />
<add key="ida:Audience" value="https://yourmvc.yourdomain.com" />

Note that which version of ADFS you are using makes a big difference. We found that, while trying to get tokens to work with version 3.0 of ADFS, they are somewhat broken at the moment. On-premises ADFS will also work much differently than Azure. We needed to customize the claims for our implementation and this post helped immensely. Startup.Auth.cs will look similar to this:

app.UseWindowsAzureActiveDirectoryBearerAuthentication(
    new WindowsAzureActiveDirectoryBearerAuthenticationOptions
    {
        Audience = ConfigurationManager.AppSettings["ida:Audience"],
        Tenant = ConfigurationManager.AppSettings["ida:Tenant"],
        Provider = new OAuthBearerAuthenticationProvider()
        {
            OnValidateIdentity = async context =>
            {
                context.Ticket.Identity.AddClaim(
                    new Claim("http://mycustomclaims/hairlenght",
                              RetrieveHairLenght(userID),
                              ClaimValueTypes.Double,
                              "LOCAL AUTHORITY"));
            }
        }
    });
As well as the web servers in front of Play, which it sounds like you have configured, Play itself has max request Content-Length limits, documented here: https://www.playframework.com/documentation/2.5.x/JavaBodyParsers#Content-length-limits

Most of the built-in body parsers buffer the body in memory, and some buffer it on disk. If the buffering was unbounded, this would open up a potential vulnerability to malicious or careless use of the application. For this reason, Play has two configured buffer limits, one for in-memory buffering, and one for disk buffering. The memory buffer limit is configured using play.http.parser.maxMemoryBuffer, and defaults to 100KB, while the disk buffer limit is configured using play.http.parser.maxDiskBuffer, and defaults to 10MB. These can both be configured in application.conf, for example, to increase the memory buffer limit to 256KB (see the sketch below).

Depending on the situation, you may want to be careful with increasing this limit too much -- if you have untrusted clients they may be able to overload your server by sending lots of very large requests in a short space of time. This may cause your server to crash with an OutOfMemoryError, leading to a denial of service attack.
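A minimal application.conf sketch (the 256KB value is the documentation's own example; the disk value here is just illustrative):

play.http.parser.maxMemoryBuffer = 256K
play.http.parser.maxDiskBuffer = 20MB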
Firstly, remove dispatch_semaphore related code from your function. func getAuthentication(username: String, password: String){ let baseURL = "Some URL here" let url = NSURL(string: baseURL)! let request = NSMutableURLRequest(URL: url) request.HTTPMethod = "POST" request.HTTPBody = "{\n \"username\": \"\(username)\",\n \"password\": \"\(password)\"\n}".dataUsingEncoding(NSUTF8StringEncoding); let session = NSURLSession.sharedSession() let task = session.dataTaskWithRequest(request) { (data, response, error) -> Void in if error == nil{ let swiftyJSON = JSON(data: data!) print(swiftyJSON) //parse the data to get the user self.id = swiftyJSON["id"].intValue self.token = swiftyJSON["meta"]["token"].stringValue } else { print("There was an error") } } task.resume() } In the above code, the function dataTaskWithRequest itself is an asynchronus function. So, you don't need to call the function getAuthentication in a background thread. For adding the completion handler, func getAuthentication(username: String, password: String, completion:((sucess: Bool) -> Void)){ let baseURL = "Some URL here" let url = NSURL(string: baseURL)! let request = NSMutableURLRequest(URL: url) request.HTTPMethod = "POST" request.HTTPBody = "{\n \"username\": \"\(username)\",\n \"password\": \"\(password)\"\n}".dataUsingEncoding(NSUTF8StringEncoding); let session = NSURLSession.sharedSession() let task = session.dataTaskWithRequest(request) { (data, response, error) -> Void in var successVal: Bool = true if error == nil{ let swiftyJSON = JSON(data: data!) print(swiftyJSON) self.id = swiftyJSON["id"].intValue self.token = swiftyJSON["meta"]["token"].stringValue } else { print("There was an error") successVal = false } dispatch_async(dispatch_get_main_queue(), { () -> Void in completion(successVal) }) } task.resume() } It can be called as follows: self.getAuthentication("user", password: "password", completion: {(success) -> Void in })
I don't know Heroku + Rails but believe I can answer some of the more generic questions. From the client's perspective, the setup/teardown of any connection is very expensive. The concept of connection pooling is to have a set of connections which are kept alive and can be used for some period of time. The JDK HttpUrlConnection does the same (assuming HTTP 1.1) so that - assuming you're going to the same server - the HTTP connection stays open, waiting for the next expected request. The same thing applies here - instead of closing a JDBC connection each time, the connection is maintained - assuming the same server and authentication credentials - so the next request skips the unnecessary work and can immediately move forward in sending work to the database server. There are many ways to maintain a client-side pool of connections: it may be part of the JDBC driver itself, or you might need to implement pooling using something like Apache Commons Pooling, but whatever you do it's going to improve performance and reduce errors that might be caused by network hiccups that could prevent your client from connecting to the server. Server-side, most database providers are configured with a pool of n possible connections that the database server may accept. Usually each additional connection has a footprint - usually quite small - so based on the memory available you can figure out the maximum number of available connections. In most cases, you're going to want to have a larger-than-expected number of connections available. For example, in postgres, the configured connection pool size is for all connections to any database on that server. If you have development, test, and production all pointed at the same database server (obviously different databases), then connections used by test might prevent a production request from being fulfilled. Best not to be stingy.
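As a hedged illustration of the client-side option, here is what a small pool with Apache Commons DBCP (which builds on the Commons Pooling mentioned above) could look like; the URL, credentials and sizes are placeholders:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.commons.dbcp2.BasicDataSource;

public class PoolExample {
    public static void main(String[] args) throws Exception {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:postgresql://db.example.com:5432/appdb");
        ds.setUsername("app");
        ds.setPassword("secret");
        ds.setInitialSize(5);   // connections opened up front
        ds.setMaxTotal(20);     // hard cap; keep it below the server-side limit

        // "borrowing" a connection reuses one from the pool instead of paying
        // the TCP + authentication setup cost on every request
        try (Connection c = ds.getConnection();
             Statement st = c.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            rs.next();
        } // close() here returns the connection to the pool
    }
}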
It is possible to POST binary content to the Yammer messages.json REST endpoint using a MultipartFormDataContent type. A working example posting a number of images, text and tags: WebProxy proxy = new WebProxy() { UseDefaultCredentials = true, }; HttpClientHandler httpClientHandler = new HttpClientHandler() { Proxy = proxy, }; using (var client = new HttpClient(httpClientHandler)) { using (var multipartFormDataContent = new MultipartFormDataContent()) { string body = "Text body of message"; var values = new[] { new KeyValuePair<string, string>("body", body), new KeyValuePair<string, string>("group_id", YammerGroupID), new KeyValuePair<string, string>("topic1", "Topic ABC"), }; foreach (var keyValuePair in values) { multipartFormDataContent.Add(new StringContent(keyValuePair.Value), String.Format("\"{0}\"", keyValuePair.Key)); } int i = 1; foreach (Picture p in PictureList) { var FileName = string.Format("{0}.{1}", p.PictureID.ToString("00000000"), "jpg"); var FilePath = Server.MapPath(string.Format("~/images/{0}", FileName)); if (System.IO.File.Exists(FilePath)) { multipartFormDataContent.Add(new ByteArrayContent(System.IO.File.ReadAllBytes(FilePath)), '"' + "attachment" + i.ToString() + '"', '"' + FileName + '"'); i++; } } var requestUri = "https://www.yammer.com/api/v1/messages.json"; client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", AccessToken); var result = client.PostAsync(requestUri, multipartFormDataContent).Result; } }
I'll try to answer the simpler question in your Post: I would be happy if I could use GIT with key file under Shell Once, that is done, you can build on it. Git for windows uses Openssh and therefore will not be able to use the putty PPK file directly. Two ways forward Option 1: Convert PPK file to OpenSSH format Steps to do so: Open your private key in PuTTYGen Top menu “Conversions”->”Export OpenSSH key”. Save the new OpenSSH key when prompted. To use this new openssh key for your git server, do the following: Open up Git Bash shell and there edit ~/.ssh/config (create ~/.ssh/ if it does not exist) and define this host: Host AuxBurgerGitServer Hostname whatevers-your-git-remote-is User the-git-user IdentityFile ~/.ssh/the-open-ssh-key-exported-before Test this by doing a ssh -T AuxBurgerGitServer which should not show any errors. If you go this way, you should use the HOST defined above whenever referring to any repositories on this host. Therefore, for example, to clone a repo you would do something like: git clone ssh://AuxBurgerGitServer/some-repo-name Option 2: Configure your GIT to use pageant You can load your PPK file in pageant and configure GIT to use pageant for authentication. For this, the only thing you would need is to setup an environment variable like so using: Control Panel → System → Advanced system settings → Environment variables (or on Windows 10: Control Panel → Search → Environment variables) GIT_SSH=c:\Program Files\Putty\plink.exe
Look at the PEM files and you will see one begins with -----BEGIN PRIVATE KEY----- and the other begins with -----BEGIN RSA PRIVATE KEY-----. The words in the BEGIN and END lines of a PEM block specify the format of the data in the block, and these specify two of about 10 (depending on exactly how you count) different data formats supported by OpenSSL for RSA keys.

The first one is the unencrypted variant of PKCS8, republished as RFC 5208, in section 5. PKCS8 can handle private keys for many different algorithms, including RSA, DSA, DH and ECDSA, with or without password-based encryption (PBE) of the key. openssl genpkey is designed to handle multiple algorithms and uses the PKCS8 format to do so.

The second one is the RSA-only privatekey syntax of PKCS1, republished as RFC 3447 and its predecessors, in section A.1. This format is written by the older openssl rsa and openssl genrsa functions because they handle only RSA, and is called the 'traditional' or 'legacy' format to distinguish it from PKCS8. PKCS1 does not define any encrypted format, but OpenSSL supports a generic PEM-encryption scheme that can be applied to this format if requested, which you did not. However, the OpenSSL 'legacy' PEM encryption is not as good as that used in PKCS8, so you should normally use PKCS8 if you want security, or possibly PKCS12 instead for a privatekey with certificates.

You can convert to PKCS8 DER and back to PEM using pkey, which like genpkey handles multiple algorithms and uses PKCS8:

openssl pkey -in key.pem [-inform PEM] -out key.der -outform DER
openssl pkey -in key.der -inform DER -out xxx.pem [-outform PEM]
# now xxx.pem is the same as key.pem

Since PEM files (unlike DER) can be recognized by the type in the BEGIN line, you can convert PKCS1 PEM back to PKCS8 directly:

openssl pkey -in key2.pem -out yyy.pem
# now yyy.pem is the same as key.pem

Programs using the OpenSSL library, including but not limited to the openssl command line, can read a privatekey PEM file in either of these formats automatically, and also either of the two encrypted formats automatically if the correct password is provided.
I faced the problem of authentication for my internal service with the Google APIs. Basically there are two methods: create the consent page where the user allows your application to access their Google account, or create a certificate to authenticate the application with "implicit" approval. As I said, I'm using the Google API for an internal project, so the first option is out of the question (the service is not public).

Go to https://console.cloud.google.com and create a new project, then go to "API manager", then "Credentials", then create a "service credential". If you follow all those steps you have a certificate with the .p12 extension; it's your key to access the Google API (remember you have to enable the key for the specific Google API you want to access).

I paste an example extracted from my project; I'm using Google Calendar, but the authentication is the same for each service.

$client_email = '[email protected]';
$private_key = file_get_contents(__DIR__ . '/../Resources/config/xxxx.p12');
$scopes = array('https://www.googleapis.com/auth/calendar');

$credentials = new \Google_Auth_AssertionCredentials(
    $client_email,
    $scopes,
    $private_key
);

$this->client = new \Google_Client();
$this->client->setAssertionCredentials($credentials);
Update 2021: Release 7.1.16 finally implemented encryption of embedded files in otherwise not encrypted pdf documents. (the API changed slightly: in the test for iText 7 below, remove the last parameter of createEmbeddedFileSpec so that it reads PdfFileSpec.createEmbeddedFileSpec(pdf,"attached file".getBytes(),null,"attachment.txt",null,null,null);) Original answer As I didn't get any answers I made some more tests with iText 5.5.9 and iText 7.0.1 and came to the conclusion that not to encrypt embedded file streams with EMBEDDED_FILES_ONLY is a bug in the new version of iText 7. It only worked with iText 5 and ENCRYPTION_AES_256, although Acrobat reader gave a warning that an error existed on this page and it might not display the page correctly. For details see the following table: Following is the code of the minimal, complete, and verifiable examples to produce the pdf files used in the above table with iText 5.5.9 ... package pdfencryptef_itext5; import com.itextpdf.text.Document; import com.itextpdf.text.DocumentException; import com.itextpdf.text.Paragraph; import com.itextpdf.text.pdf.PdfFileSpecification; import com.itextpdf.text.pdf.PdfWriter; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.FileOutputStream; import java.io.IOException; import java.security.Security; import java.security.cert.Certificate; import java.security.cert.CertificateException; import java.security.cert.CertificateFactory; import java.security.cert.X509Certificate; import org.bouncycastle.jce.provider.BouncyCastleProvider; public class PDFEncryptEF_iText5 { public static void main(String[] args) throws Exception { new PDFEncryptEF_iText5().createPDF("iText5_STD128.pdf", PdfWriter.STANDARD_ENCRYPTION_128); new PDFEncryptEF_iText5().createPDF("iText5_AES128.pdf", PdfWriter.ENCRYPTION_AES_128); new PDFEncryptEF_iText5().createPDF("iText5_AES256.pdf", PdfWriter.ENCRYPTION_AES_256); Security.addProvider(new BouncyCastleProvider()); new PDFEncryptEF_iText5().createPDF("iText5_AES128C.pdf", -PdfWriter.ENCRYPTION_AES_128); new PDFEncryptEF_iText5().createPDF("iText5_AES256C.pdf", -PdfWriter.ENCRYPTION_AES_256); } public void createPDF(String fileName, int encryption ) throws FileNotFoundException, DocumentException, IOException, CertificateException { Document document = new Document(); Document.compress = false; PdfWriter writer = PdfWriter.getInstance(document, new FileOutputStream(fileName)); if( encryption >= 0 ){ writer.setEncryption("secret".getBytes(),"secret".getBytes(), 0, encryption | PdfWriter.EMBEDDED_FILES_ONLY); } else { Certificate cert = getPublicCertificate("MyCert.cer" ); writer.setEncryption( new Certificate[] {cert}, new int[] {0}, -encryption | PdfWriter.EMBEDDED_FILES_ONLY); } writer.setPdfVersion(PdfWriter.VERSION_1_6); document.open(); PdfFileSpecification fs = PdfFileSpecification.fileEmbedded(writer, null, "attachment.txt", "attached file".getBytes(), 0); writer.addFileAttachment( fs ); document.add(new Paragraph("main file")); document.close(); } public Certificate getPublicCertificate(String path) throws IOException, CertificateException { FileInputStream is = new FileInputStream(path); CertificateFactory cf = CertificateFactory.getInstance("X.509"); X509Certificate cert = (X509Certificate) cf.generateCertificate(is); return cert; } } ... 
and iText 7.0.1: package pdfencryptef_itext7; import com.itextpdf.kernel.pdf.CompressionConstants; import com.itextpdf.kernel.pdf.EncryptionConstants; import com.itextpdf.kernel.pdf.PdfDocument; import com.itextpdf.kernel.pdf.PdfVersion; import com.itextpdf.kernel.pdf.PdfWriter; import com.itextpdf.kernel.pdf.WriterProperties; import com.itextpdf.kernel.pdf.filespec.PdfFileSpec; import com.itextpdf.layout.Document; import com.itextpdf.layout.element.Paragraph; import java.io.FileInputStream; import java.io.FileNotFoundException; import java.io.IOException; import java.security.Security; import java.security.cert.Certificate; import java.security.cert.CertificateException; import java.security.cert.CertificateFactory; import java.security.cert.X509Certificate; import org.bouncycastle.jce.provider.BouncyCastleProvider; public class PDFEncryptEF_iText7 { public static void main(String[] args) throws Exception { new PDFEncryptEF_iText7().createPDF("iText7_STD128.pdf", EncryptionConstants.STANDARD_ENCRYPTION_128); new PDFEncryptEF_iText7().createPDF("iText7_AES128.pdf", EncryptionConstants.ENCRYPTION_AES_128); new PDFEncryptEF_iText7().createPDF("iText7_AES256.pdf", EncryptionConstants.ENCRYPTION_AES_256); Security.addProvider(new BouncyCastleProvider()); new PDFEncryptEF_iText7().createPDF("iText7_AES128C.pdf", -EncryptionConstants.ENCRYPTION_AES_128); new PDFEncryptEF_iText7().createPDF("iText7_AES256C.pdf", -EncryptionConstants.ENCRYPTION_AES_256); } public void createPDF(String fileName, int encryption ) throws FileNotFoundException, IOException, CertificateException{ PdfWriter writer; if( encryption >= 0 ){ writer = new PdfWriter(fileName, new WriterProperties().setStandardEncryption("secret".getBytes(),"secret".getBytes(), 0, encryption | EncryptionConstants.EMBEDDED_FILES_ONLY) .setPdfVersion(PdfVersion.PDF_1_6)); } else { Certificate cert = getPublicCertificate("MyCert.cer" ); writer = new PdfWriter(fileName, new WriterProperties().setPublicKeyEncryption( new Certificate[] {cert}, new int[] {0}, -encryption | EncryptionConstants.EMBEDDED_FILES_ONLY ) .setPdfVersion(PdfVersion.PDF_1_6)); } writer.setCompressionLevel(CompressionConstants.NO_COMPRESSION); PdfDocument pdf = new PdfDocument(writer); PdfFileSpec fs = PdfFileSpec.createEmbeddedFileSpec(pdf,"attached file".getBytes(),null,"attachment.txt",null,null,null,true); pdf.addFileAttachment("attachment.txt", fs); try (Document doc = new Document(pdf)) { doc.add(new Paragraph("main file")); } } public Certificate getPublicCertificate(String path) throws IOException, CertificateException { FileInputStream is = new FileInputStream(path); CertificateFactory cf = CertificateFactory.getInstance("X.509"); X509Certificate cert = (X509Certificate) cf.generateCertificate(is); return cert; } } I must admit that I'm a bit disappointed that there was no feedback from the iText people to at least the first of my three questions but, hopefully, future versions of iText 7 will correctly process the EMBEDDED_FILES_ONLY flag. As the tests showed, it seems to be far from trivial for both the pdf producer as well as the reader to correctly handle this feature.
Do you want to do TLS client authentication (that is, your Python script needs to authenticate to the server / MQTT broker)? Or do you want your Python script to behave like a web browser, and just validate the server certificate? If you only want the latter, I have had success using the tls_set() method in the Paho Python client when I point it to a PEM file containing the server's certificate. And that's the only argument you need to pass on tls_set() to have the Paho client validate the server certificate, and connect to the broker using TLS. For example: mqttc.tls_set("/home/bob/certificates/mqttbrokercertificate.pem") How do you obtain the mqtt broker's certificate in PEM format? The easiest way is with openssl: openssl s_client -host mqtt.broker.hostname.com -port 8883 -showcerts Redirect the output to a file, and delete every line in the file except what is between the "BEGIN CERTIFICATE" and "END CERTIFICATE" lines (inclusive -- be sure to include these lines as well). This is a good article here on StackOverflow about how to save a server's SSL certificate using openssl: How to save server SSL certificate to a file Lastly, you need to be sure of what version of TLS your broker supports, and make sure your Python client also supports it. For example, the IBM Watson IoT Platform requires TLS 1.2. The ssl module in Python 2.7 (which is built on openssl) does not support TLS 1.2. Generally, you need Python 3.X, and openssl of at least 1.0.1. Here is how you can set the TLS version on the Paho client (don't forget to import ssl): mqttc.tls_set("/home/bob/certificates/mqttbrokercertificate.pem", tls_version=ssl.PROTOCOL_TLSv1_2) If you want TLS client authentication, that's probably better handled in an entirely separate article. But I hope this helps with TLS server authentication using the Paho Python client.
There are several issues with your approach and your code:

istream::getline() can't read integers. It can only read into an array of char.
eof is a function and not a property.
The way you mix << and >> to parse data with a stringstream is not optimal.
With >>st_info[i] you extract a single char from the stringstream, which will overwrite existing information.
And certainly other problems.

I therefore propose that you use the following skeleton, which reads the file line by line and parses each line separately using a stringstream. Note that I only use the non-member variant of getline() to read strings instead of arrays of char (this frees me from thinking about buffer overflows):

...
char delim;
int rank;
string names, grades;
string line;
while (getline(inFile, line))    // read line by line
{
    stringstream sst{line};      // then parse the line using a string stream
    sst >> rank;                 // read an integer
    sst >> delim;                // skip whitespace and read a single char
    if (delim != '{')
    {
        cout << "Error on line: { expected for name instead of " << delim << endl;
        continue;                // next line, please !!
    }
    getline(sst, names, '}');    // TO DO: error handling
    sst >> delim;
    if (delim != '{')
    {
        cout << "Error on line: { expected for grades instead of " << delim << endl;
        continue;                // next line, please !!
    }
    getline(sst, grades, '}');   // TO DO: additional error handling
    cout << rank << " " << names << " " << grades << endl;   // temporary
    // TO DO: parse names and grades by using further stringstreams
}

Online demo

Note that I used a simple parsing approach: read a char and check it matches the expected opening character, and use getline() to read until the closing character (the latter being consumed but excluded from the string). This doesn't allow for nested {...} in your format.
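To address the remaining "TO DO", here is a hedged sketch of how the names and grades strings could themselves be parsed with further stringstreams (it assumes names are comma-separated and grades are whitespace-separated integers, which may differ from your actual file format):

#include <sstream>
#include <string>
#include <vector>
using namespace std;

vector<string> parseNames(const string& names)
{
    vector<string> result;
    stringstream sst{names};
    string name;
    while (getline(sst, name, ','))   // "Alice,Bob" -> {"Alice", "Bob"}
        result.push_back(name);
    return result;
}

vector<int> parseGrades(const string& grades)
{
    vector<int> result;
    stringstream sst{grades};
    int g;
    while (sst >> g)                  // "12 15 9" -> {12, 15, 9}
        result.push_back(g);
    return result;
}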
You seem to be confusing two concepts:

- the object.$id of an AngularFire object contains the key of that object in the Firebase Database.
- a firebaseUser.uid in Firebase terms is the identification of a Firebase Authentication user.

It is common to store your Firebase Authentication users in the database under their uid, in which case user.$id would be their uid. But they are still inherently different things.

Users
  uid1
    displayName: "makkasi"
  uid2
    displayName: "Frank van Puffelen"

So if we look at the code snippet you shared:

return Auth.$requireSignIn().then(function (firebaseUser) {
    return Users.getProfile(firebaseUser.uid).$loaded().then(function (profile) {

The first line requires that the user is signed in; only then will it execute the next line with the firebaseUser that was signed in. This is a regular JavaScript object (firebase.User), not an AngularFire $firebaseObject. The second line then uses the firebaseUser.uid property (the identification of that user) to load the user's profile from the database into an AngularFire $firebaseObject. Once that profile is loaded, it executes the third line. If the users are stored in the database under their uid, at this stage profile.$id and firebaseUser.uid will be the same value.
Although the question is how to encrypt with the DH APIs, I want to address the whole problem. The accepted answer is good, but if you don't know how e2e encryption works, that tutorial won't help you achieve what you actually want, which I guess is how to do end-to-end encryption using DH key exchange as part of the process. So I broke it up into understandable pieces. It goes like this:

As the concept goes, Alice and Bob first have to agree on a generator and a prime number, so that each of them can generate their keys. Having done that, both of them need to share their public keys with each other. So at first, let Alice generate her keys and send them to Bob:

JSON.stringify({
  type: 'keyxchange_alice',
  from: from,
  to: to,
  prime: alice.sharedPrime,
  generator: alice.generator,
  key: alice.getPublicKey()
})

And then Bob will need to generate and send his public key to Alice:

const bob = new DeffMan(Buffer.from(msg.prime), Buffer.from(msg.generator))
const bob_key = bob.getPublicKey()

JSON.stringify({
  type: 'keyxchange_bob',
  key: bob_key
})

You will also need to store these keys against the corresponding users, which could be done by storing them (in this case, in a JavaScript hash/object); e.g. Alice can store: { bob: bobMessage.key }.

Now that they have each other's public keys, Alice and Bob can each compute the same shared secret; for Bob, generalized, it is alicePublicKey ^ bobPrivateKey (read more on Diffie-Hellman key exchange on Wikipedia, and a plain-English version here).

This shared secret is then used as the key to encrypt the messages with aes-256-cbc before they are sent over TCP.

The above can be refined further by regenerating the secrets every time, which adds one more round trip for each message, or you could use the Double Ratchet scheme.

My original article is in this link as a gist
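If the DeffMan wrapper obscures what is going on, the underlying arithmetic is small enough to sketch. This toy illustration (in Python, with an unrealistically small prime; a real exchange uses a large prime and a proper crypto library) shows why both sides end up with the same secret:

import random

prime = 23        # toy value; real DH uses a 2048-bit (or larger) prime
generator = 5

# Each side picks a private exponent and publishes generator^private mod prime
alice_private = random.randrange(2, prime - 1)
bob_private = random.randrange(2, prime - 1)
alice_public = pow(generator, alice_private, prime)
bob_public = pow(generator, bob_private, prime)

# Each side combines its own private key with the other's public key
alice_secret = pow(bob_public, alice_private, prime)   # bobPublic ^ alicePrivate mod p
bob_secret = pow(alice_public, bob_private, prime)     # alicePublic ^ bobPrivate mod p

assert alice_secret == bob_secret   # both ends now hold the same shared secret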
There are basically two ways to achieve this:

1. Create a new model Artist with a OneToOneField to the Django user model. This is most likely what you want, e.g. like this:

from django.contrib.auth.models import User

class Artist(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    genres = models.ManyToManyField('myapp.Genre', related_name='artists')

class Portrait(models.Model):
    artist = models.ForeignKey('myapp.Artist', related_name='portraits')

class Genre(models.Model):
    name = models.CharField(max_length=30)

2. Specify a custom user model that inherits from AbstractBaseUser. This is only recommended if you want to store additional information related to authentication itself. I suggest that you read the documentation on this carefully: https://docs.djangoproject.com/en/1.10/topics/auth/customizing/#extending-the-existing-user-model

To create a custom sign-up page you will need to create your own FormView with a custom template, e.g. using the Django built-in UserCreationForm and/or a ModelForm. You could extend it with whichever fields you need. There are several ways to achieve this depending on your needs.
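As a starting point, a minimal sign-up view using the built-in UserCreationForm might look like this (the template path and success URL name are just example values; the related Artist row could be created in form_valid()):

from django.contrib.auth.forms import UserCreationForm
from django.urls import reverse_lazy
from django.views.generic import CreateView

class SignUpView(CreateView):
    form_class = UserCreationForm
    template_name = 'registration/signup.html'   # example template path
    success_url = reverse_lazy('login')          # example URL name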
I'd say that this is not an encryption algorithm but rather a very, very simple hashing algorithm. The hashing is easy:

- Take a letter from the given string
- Determine the number for that letter
- Append that number to the output

Restoring the original is not possible, as for every number (apart from 0) there are at least 3 possible characters. For example, another possible decrypted password for your above example would also be: UGE PAQRWOSE GR ONAGJF.

And that is also the problem with this algorithm: unlike proper secure hashing algorithms, it immensely reduces the number of tries required to find a password matching a given hash, because many, many different inputs create the same output. So when trying to crack a password, you have many more chances to hit the right "hash" even when the actual password is wrong. Example: instead of "THE PASSWORD IS MOBILE", "UGE PAQRWOSE GR ONAGJF" and many other combinations of letters would also be accepted as the correct password.

So while this may be nice to teach children about hashing, please don't use this in a real-world application...

There is no real "one answer" solution to the challenge you linked. If it is not a fraud, any combination of letters that leads to the given number must be deemed a valid solution. Of course "THE PASSWORD IS MOBILE" is one of them. Without the additional information that the password must be a valid English sentence, this allows for many possible solutions. Unless they accept any combination of letters that leads to the hash 8430727796730470662453 as a solution, I cannot take that page seriously.
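For what it's worth, the digits quoted above are consistent with the ordinary phone-keypad mapping (each digit covers three or four letters, which is exactly why the output cannot be reversed uniquely). A small sketch, assuming that mapping:

KEYPAD = {
    '2': "ABC", '3': "DEF", '4': "GHI", '5': "JKL",
    '6': "MNO", '7': "PQRS", '8': "TUV", '9': "WXYZ",
}
LETTER_TO_DIGIT = {letter: digit
                   for digit, letters in KEYPAD.items()
                   for letter in letters}

def keypad_hash(text):
    # Non-letters (spaces, punctuation) become 0
    return "".join(LETTER_TO_DIGIT.get(c, "0") for c in text.upper())

print(keypad_hash("THE PASSWORD IS MOBILE"))   # 8430727796730470662453
print(keypad_hash("UGE PAQRWOSE GR ONAGJF"))   # the very same digits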
I encountered that problem too. Here is how I solved the situation:

- Create a shared service with observable fields, as explained in the official Angular documents.
- In your navigation bar component, subscribe to the value from the shared service to decide whether to display the navigation bar.
- In the login and logout pages, update the value. Since you already subscribed to the value, the subscriber handles this by itself.
- Create an authentication service. Add a method similar to this to ask your backend whether the request is authenticated:

//method parameters depend on what you want
isAuthorized(url: string, errorCallback: (any) => void) {
    let body = JSON.stringify(url)
    return this.http.post('account/isauthorized', body)
        .map((response: Response) => {
            //update value to display navigation bar if user is authenticated
            yourSharedService.observableField.next(true);
            return true;
        })
        .catch((response: Response) => {
            errorCallback(response.status);
            return Observable.of(false);
        });
}

- Create an authentication guard and call the isAuthorized method in canActivate, canLoad or canActivateChild. In your callback, handle unauthorized requests: you can redirect the user to error pages, remove the navigation bar, or whatever you want.

I hope it helps!
Xamarin app sizes are slightly bigger than Objective-C apps as the Mono framework is part of the app bundle. The architectures you build for will also increase the app size, as a binary will be created for each one - I suggest that you only build for ARMv7 and ARM64, omitting ARMv7s unless you need to utilise the specific optimisations provided with that architecture.

You can potentially reduce the app size by setting the linker options to link all assemblies in your release configuration. Note that setting the linker to none, as stated in your question, will be detrimental to your app size. https://developer.xamarin.com/guides/ios/advanced_topics/linker/#Link_all_assemblies

Each nuget package added to your project will also increase the size of your app. However, with the use of the linker, the impact of this will depend on how much of the package you are actually using. Anything unused will be stripped out during the linking process. You can also reduce your app size by looking at what other assets you have included in your app, such as images, and seeing if you can save space by making them smaller or increasing compression.

As a final note, after you upload your app to iTunes Connect, the App Store adds its own encryption to your app, which can significantly increase the overall size. As encryption obfuscates patterns, the resulting compressed app will be larger. The impact varies from app to app. Moving data, such as long strings or tables, out of code and into external files will make the final download smaller, because those files will be compressed more efficiently.
This will strongly depend on a lot of small details; I'll try not to forget anything, but in theory it should be fine to do so, and if certain conditions are met I would not consider it a bad practice.

OAuth2 states that access tokens should be opaque to clients, but JWT is just a token format (Learn JSON Web Tokens) and its usage in other circumstances does not imply the same rules as OAuth2. Also note that getting the information from an additional request has the same end result, with the additional overhead of one more call. There would be a slight benefit if permissions are very volatile, given you could repeat the calls. However, the important part is more focused on what you mean by the client and how the client would use that information, so I'll elaborate on this.

Assumptions:

- the client you mention can be deployed as a browser-based application (SPA), a native application, or be some server-side component acting as a client.
- both the server and client are controlled by the same entity.
- the client and server components can be seen as a single application, that is, for an end-user the fact that there are client and server components makes no difference; they use them as a whole.

Explanation

In this situation the token issued by the server is just a way for the client to later access protected resources without requiring explicit user authentication again; it's a mechanism to maintain a session between the two components. Given the same entity controls both the client and server, it's acceptable to treat the received token as a whitebox instead of a blackbox. The client can then interpret the information in the token and take advantage of it to provide a better experience for the end-user. However, this implies that the server will need to continue to validate the token and its permissions accordingly; any interpretation of the data by the client is purely to provide optional functionality.

Furthermore, for clients deployed to hostile environments, as would be the case for a SPA application, the decisions taken by looking into the data must only result in purely aesthetic decisions, as the user could fake the permissions data. For example, you could use it to conditionally hide/disable some user interface just so that the user wouldn't have to click it to find out it wasn't allowed to do so. A good analogy would be Javascript-based input validation in web forms; you should do it for better user experience, but the server will need to do it again because the user can bypass the Javascript validation.
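To make the whitebox reading concrete: the payload segment of a JWT is just base64url-encoded JSON, so a client can inspect it without any secret. A minimal sketch in Python (a browser client would do the equivalent with atob and JSON.parse); the "permissions" claim name is only an example:

import base64
import json

def jwt_claims(token):
    # A JWT is header.payload.signature; the payload is base64url-encoded JSON
    payload = token.split(".")[1]
    padded = payload + "=" * (-len(payload) % 4)   # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

# claims = jwt_claims(received_token)
# if "admin" in claims.get("permissions", []):
#     show_admin_menu()   # purely cosmetic; the server still enforces permissions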
Computed properties, by default, observe any changes made to the properties they depend on, and are dynamically updated when they're called. Unless you are invoking or calling that computed property, it will not execute your intended code. Observers, on the other hand, react without invocation when the property they are watching changes. But they are often overused, and can easily introduce bugs due to their synchronous nature.

You could refactor your observers and computed properties into helper functions that are called directly. This makes them easier to unit test as well.

In your controller, you can handle the initial action of logging in, like this:

currentUser: Ember.inject.service(),

actions: {
  login() {
    this.auth({ username: 'Mary' });
  },
},

auth(data) {
  // Send data to server for authentication...
  // ...upon response, handle the following within the promise's `then`
  // method, failures caught within `catch`, etc. But for purposes of
  // demonstration, just mocking this for now...
  const response = {
    username: 'Mary',
    authToken: 'xyz',
  };

  this.get('currentUser').setConsumer(response);
},

The current-user service could then set its properties, and call a helper function on the action-cable service:

actionCable: Ember.inject.service(),
authToken: null,
username: null,

setConsumer(response) {
  this.set('authToken', response.authToken);
  this.set('username', response.username);

  this.get('actionCable').setConsumer();
},

The action-cable service reads properties from the current-user service, sets the consumerUrl, and calls the cable service to create the consumer:

cable: Ember.inject.service(),
currentUser: Ember.inject.service(),

setConsumer() {
  var consumerUrl = "ws://localhost:10000/cable";

  if (this.get("currentUser.username") !== null) {
    consumerUrl += "?token=" + (this.get("currentUser.authToken"));
  }

  console.log("ACTION CABLE SERVICE, Consumer URL: ", consumerUrl);
  this.get("cable").createConsumer(consumerUrl);
}

I've created an Ember Twiddle to demonstrate.
EDIT

I think the problem is that the Azure token server doesn't accept client credentials sent as an Authorization header, e.g.

Authorization: Basic YmE1NTZlYmItZGY2OS00NjBhLWEwMjItNTI0NWQ0MzA2N2UxOmVxVzlqaXRobXF2cVFiVWY5dmxaWnhZN2wwUzZhQ0pHSkExSGt0eUd3N0W6

but that's how Postman's "Get new access token" tool sends it. So it isn't going to work. If you look at Microsoft's documentation and search for "get a token" you will see it implies that client credentials should be supplied in the body.

POST /common/oauth2/v2.0/token HTTP/1.1
Host: login.microsoftonline.com
Content-Type: application/x-www-form-urlencoded

client_id=535fb089-9ff3-47b6-9bfb-4f1264799865&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default&client_secret=qWgdYAmab0YSkuL1qKv5bPX&grant_type=client_credentials

This works fine but seems to contradict the OAuth 2.0 spec, which says: The authorization server MUST support the HTTP Basic authentication scheme for authenticating clients that were issued a client password.

END EDIT

You definitely can get a bearer token back without supplying a resource. Notice that resource isn't even spelt correctly in the Postman http body of the previous answer - it's spelt as resrource, which is why its value of https://graph.microsoft.com is ignored and does not match the resource sent back in the response (00000002-0000-0000-c000-000000000000). Although, funnily enough, they both relate to the Graph API... but that's a digression.

Confusingly, there are two ways of supplying client credentials to an OAuth 2.0 server, and some servers don't accept both ways!

1. adding a Basic auth header which is set to Base64(ClientId + ":" + ClientSecret)
2. adding clientId and clientSecret in the body of the request.

I guess that's the problem with OAuth 2.0 being a spec rather than a protocol... See - https://www.rfc-editor.org/rfc/rfc6749#section-2.3.1

Postman's Request Token UI uses method 1, but the Azure auth server expects method 2. I know because I ran Fiddler and could see the request Postman put together. If you manually put the client credentials in the body, e.g.

grant_type=client_credentials&scope=&client_id=ba556ebb-xxxx9-460a-ax2x-5245d43067e1&client_secret=eqW9jighghghgvlZZxY7l0S6aCJGJA1HktyGw7E=

and don't use a Basic Auth http header, you can get a bearer token back even without supplying a resource. This works fine - but obviously that's no good for you in terms of using Postman to get and store your tokens!
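Outside of Postman, it is easy to reproduce the "credentials in the body" request yourself; for example, with Python's requests library (the client id and secret below are just the placeholder values from the Microsoft documentation snippet quoted above):

import requests

token_url = "https://login.microsoftonline.com/common/oauth2/v2.0/token"

resp = requests.post(token_url, data={
    "grant_type": "client_credentials",
    "client_id": "535fb089-9ff3-47b6-9bfb-4f1264799865",   # placeholder
    "client_secret": "qWgdYAmab0YSkuL1qKv5bPX",            # placeholder
    "scope": "https://graph.microsoft.com/.default",
})
resp.raise_for_status()
access_token = resp.json()["access_token"]   # the bearer token

requests form-encodes the dict into the request body, which is exactly method 2.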
You need to get the HttpConfiguration instance from the GlobalConfiguration object and call the MapHttpAttributeRoutes() method from inside the RegisterArea() method of the AreaRegistration.cs:

public override void RegisterArea(AreaRegistrationContext context)
{
    GlobalConfiguration.Configuration.MapHttpAttributeRoutes();
    //... omitted code
}

Finally, you must remove the config.MapHttpAttributeRoutes() call from the WebApiConfig, or you will get a duplicate-route exception.

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Web API configuration and services
        config.EnableCors();

        // Configure Web API to use only bearer token authentication.
        config.SuppressDefaultHostAuthentication();
        config.Filters.Add(new HostAuthenticationFilter(OAuthDefaults.AuthenticationType));

        // Web API routes
        //config.MapHttpAttributeRoutes();

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}
For some reason, [System.IO.Compression.ZipFile]::CreateFromDirectory will not create a zip/vsix file that works correctly, even though it will show as installed. The template does not show in the new project UI. Use 7zip instead to create zip files. While I tried to investigate this issue, I did not like that the code was not using fully qualified paths, and the strings were hard to look at. I refactored your code a bit. Based on my testing, this now works as expected. CODE <# http://stackoverflow.com/questions/40462544/powershell-script-to-create-visual-studio-project-template-extension-zip-issue For some reason, [System.IO.Compression.ZipFile]::CreateFromDirectory will not create a zip/vsix file that works correctly, even though it will show as installed. The template does not show in the new project UI. Use 7zip instead to create zip files. #> Set-StrictMode -Version Latest $VerbosePreference = [System.Management.Automation.ActionPreference]::Continue # Makes debugging from ISE easier. if ($PSScriptRoot -eq "") { $root = Split-Path -Parent $psISE.CurrentFile.FullPath } else { $root = $PSScriptRoot } Set-Location $root <# Create a zip file with items under Path in the root of the zip file. #> function New-ZipFile([string]$Path, [string]$FileName) { $zipExe = 'C:\Program Files\7-Zip\7z.exe' $currentLocation = Get-Location Set-Location $Path & $zipExe a -tzip $FileName * -r Set-Location $currentLocation } # Create temporary directories for the zip archives "Extension", "Template" | % {New-Item (Join-Path $root $_) -ItemType Directory} # Build up the contents of the template file $templateContent = @' <?xml version="1.0" encoding="utf-8"?> <VSTemplate Version="3.0.0" Type="Project" xmlns="http://schemas.microsoft.com/developer/vstemplate/2005" xmlns:sdk="http://schemas.microsoft.com/developer/vstemplate-sdkextension/2010"> <TemplateData> <Name>MyExtension</Name> <Description>MyExtension</Description> <Icon>MyExtension.ico</Icon> <ProjectType>CSharp</ProjectType> <ProjectSubType></ProjectSubType> <RequiredFrameworkVersion>2.0</RequiredFrameworkVersion> <SortOrder>1000</SortOrder> <TemplateID>61251892-9605-4816-846b-858352383c38</TemplateID> <CreateNewFolder>true</CreateNewFolder> <DefaultName>MyExtension</DefaultName> <ProvideDefaultName>true</ProvideDefaultName> </TemplateData> <TemplateContent> <Project File="MyExtension.csproj" ReplaceParameters="true"></Project> </TemplateContent> </VSTemplate> '@ # Save the template file $templateContent | Out-File (Join-Path $root "Template\MyExtension.vstemplate") -Encoding "UTF8" #-NoNewline # Build up the contents of the proj file $projContent = @' <?xml version="1.0" encoding="utf-8"?> <Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <Import Project="$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props" Condition="Exists('$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props')" /> <PropertyGroup> <Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration> <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform> <ProductVersion> </ProductVersion> <SchemaVersion>2.0</SchemaVersion> <ProjectGuid>{403C08FA-9E44-4A8A-A757-1662142E1334}</ProjectGuid> <ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids> <OutputType>Library</OutputType> <AppDesignerFolder>Properties</AppDesignerFolder> <RootNamespace>$safeprojectname$</RootNamespace> 
<AssemblyName>$safeprojectname$</AssemblyName> <TargetFrameworkVersion>v4.5</TargetFrameworkVersion> <UseIISExpress>false</UseIISExpress> <IISExpressSSLPort /> <IISExpressAnonymousAuthentication /> <IISExpressWindowsAuthentication /> <IISExpressUseClassicPipelineMode /> </PropertyGroup> <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' "> <DebugSymbols>true</DebugSymbols> <DebugType>full</DebugType> <Optimize>false</Optimize> <OutputPath>bin\</OutputPath> <DefineConstants>DEBUG;TRACE</DefineConstants> <ErrorReport>prompt</ErrorReport> <WarningLevel>4</WarningLevel> </PropertyGroup> <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' "> <DebugType>pdbonly</DebugType> <Optimize>true</Optimize> <OutputPath>bin\</OutputPath> <DefineConstants>TRACE</DefineConstants> <ErrorReport>prompt</ErrorReport> <WarningLevel>4</WarningLevel> </PropertyGroup> <ItemGroup> <Reference Include="Microsoft.CSharp" /> <Reference Include="System.ServiceModel" /> <Reference Include="System.Transactions" /> <Reference Include="System.Web.DynamicData" /> <Reference Include="System.Web.Entity" /> <Reference Include="System.Web.ApplicationServices" /> <Reference Include="System.ComponentModel.DataAnnotations" /> <Reference Include="System" /> <Reference Include="System.Data" /> <Reference Include="System.Core" /> <Reference Include="System.Data.DataSetExtensions" /> <Reference Include="System.Web.Extensions" /> <Reference Include="System.Xml.Linq" /> <Reference Include="System.Drawing" /> <Reference Include="System.Web" /> <Reference Include="System.Xml" /> <Reference Include="System.Configuration" /> <Reference Include="System.Web.Services" /> <Reference Include="System.EnterpriseServices" /> </ItemGroup> <PropertyGroup> <VisualStudioVersion Condition="'$(VisualStudioVersion)' == ''">10.0</VisualStudioVersion> <VSToolsPath Condition="'$(VSToolsPath)' == ''">$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)</VSToolsPath> </PropertyGroup> <Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" /> <Import Project="$(VSToolsPath)\WebApplications\Microsoft.WebApplication.targets" Condition="'$(VSToolsPath)' != ''" /> <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" Condition="false" /> <ProjectExtensions> <VisualStudio> <FlavorProperties GUID="{349c5851-65df-11da-9384-00065b846f21}"> <WebProjectProperties> <UseIIS>False</UseIIS> <AutoAssignPort>True</AutoAssignPort> <DevelopmentServerPort>58060</DevelopmentServerPort> <DevelopmentServerVPath>/</DevelopmentServerVPath> <IISUrl> </IISUrl> <NTLMAuthentication>False</NTLMAuthentication> <UseCustomServer>True</UseCustomServer> <CustomServerUrl>http://localhost/</CustomServerUrl> <SaveServerSettingsInUserFile>False</SaveServerSettingsInUserFile> </WebProjectProperties> </FlavorProperties> </VisualStudio> </ProjectExtensions> <!-- To modify your build process, add your task inside one of the targets below and uncomment it. Other similar extension points exist, see Microsoft.Common.targets. 
<Target Name="BeforeBuild"> </Target> <Target Name="AfterBuild"> </Target> --> </Project> '@ # Save the proj file $projContent | Out-File (Join-Path $root "Template\MyExtension.csproj") -Encoding "UTF8" #-NoNewline # Create the template zip file New-Item (Join-Path $root "Extension\ProjectTemplates\CSharp\Web\1033") -ItemType Directory New-ZipFile (Join-Path $root "Template") (Join-Path $root "Extension\ProjectTemplates\CSharp\Web\1033\MyExtension.zip") # Create a content types xml file (an error will be thrown if this does not exist) $conentTypesContent = @' <?xml version="1.0" encoding="utf-8"?><Types xmlns="http://schemas.openxmlformats.org/package/2006/content-types"><Default Extension="vsixmanifest" ContentType="text/xml" /><Default Extension="zip" ContentType="application/zip" /></Types> '@ # Save the content types file $conentTypesContent | Out-File -literalPath (Join-Path $root "Extension\[Content_Types].xml") -Encoding "UTF8" #-NoNewline # Now create an extension manifest for the visual studio template $extensionContent = @' <PackageManifest Version="2.0.0" xmlns="http://schemas.microsoft.com/developer/vsx-schema/2011"> <Metadata> <Identity Id="MyExtension - 1" Version="0.1.0" Language="en-US" Publisher="MyExtension.net Ltd" /> <DisplayName>MyExtension Project Template</DisplayName> <Description xml:space="preserve">MyExtension Project Template Extension</Description> </Metadata> <Installation> <InstallationTarget Id="Microsoft.VisualStudio.Community" Version="[14.0]" /> </Installation> <Dependencies> <Dependency Id="Microsoft.Framework.NDP" DisplayName="Microsoft .NET Framework" Version="[4.5,)" /> </Dependencies> <Assets> <Asset Type="Microsoft.VisualStudio.ProjectTemplate" Path="ProjectTemplates" /> </Assets> </PackageManifest> '@ # Save the extension file $extensionContent | Out-File (Join-Path $root "Extension\extension.vsixmanifest") -Encoding "UTF8" #-NoNewline # Create the extension zip file New-ZipFile (Join-Path $root "Extension") (Join-Path $root "MyExtension.vsix") # Delete the temporary directories "Extension", "Template" | % {Remove-Item (Join-Path $root $_) -Recurse -Force}
Spring Session

Spring Session is a great, lesser-known project in the Spring portfolio. It easily enables your applications to use an external session store (e.g. Redis) instead of a localized session (e.g. Tomcat). This allows you to leverage load balancers that distribute traffic across multiple servers without losing application state (e.g. logged in/logged out). It also allows you to reboot individual servers without destroying the user's session.

Restful?

Yes, you can use a modified Spring Session configuration to more appropriately use it with REST endpoints. You'll use HttpBasic to perform the initial authentication, but you'll receive an authorization token which you'll pass in subsequent requests as an HTTP header in lieu of the username & password. See the link to the docs for more detail.

Spring Security OAuth

This is a much more complicated setup, but there are advantages such as leveraging external identity providers (e.g. Google, Facebook). You can also facilitate SSO across multiple applications. I would recommend starting with Spring Session as it is much simpler for a beginner.
First off, Android 4.3 is inherently not secure. There are now multiple exploits, including remote ones such as StageFright. Second, I assume you were talking about some-securerandom-thoughts.html, as your link was dead. It's talking about random number generation, but I don't see that in your code. Instead I see AES encryption, which isn't random at all. Also, looking at Potentially insecure random numbers on Android 4.3 and older, this warning may have been related to a previous version of your code, which relied on SecureRandom to initialize the KeyGenerator.

Doing a Google search for 'aes not secure' brings up a whole bunch of results and opinions, but it seems secure enough for most people. Having said that, doing a Google search for 'ecb not secure' brings up Why shouldn't I use ECB encryption?, which aptly demonstrates why it's not secure (a short demo follows below). But that's not secure on any platform, not just Android 4.3.

Hope this helps, and please clarify if the warning really came from this code snippet or specify the exact line.
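If you want to see the ECB weakness for yourself, encrypting two identical plaintext blocks makes it obvious. A quick sketch using the Python cryptography package rather than the Android APIs, purely for illustration:

import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
block = b"A" * 16            # one 16-byte AES block
plaintext = block + block    # the same block twice

encryptor = Cipher(algorithms.AES(key), modes.ECB(), backend=default_backend()).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

# ECB encrypts equal plaintext blocks to equal ciphertext blocks,
# so the structure of the data leaks into the ciphertext.
print(ciphertext[:16] == ciphertext[16:32])   # True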
The problem with Laravel 5.3 passport is that unlike previous OAuth 2.0 Server for Laravel library offered by lucadegasperi, it has no API to make clients directly . So as if now the client can only be made through the front-end. FYI we wanted to use laravel passport solely for our mobile app so while creating and registering user we would have only EMAIL & Password and in some cases only Facebook UserID. in the oauth_clients convert the id field into a normal field i.e. remove it as being primary key and make the data type as varchar so that we can store email address as client_ids as they are also unique for your system. Incase of Facebook login we store Facebook user IDs here in this column which again will be unique for each our client. Also for other tables like: oauth_access_tokens, oauth_auth_codes & oauth_personal_access_clients change client_id to VARCHAR(255) so that it can store email addresses or Facebook User IDs. Now go to your models and create a model for oauth_clients table so that you can create client pragmatically from the code while creating users. <?php namespace App; use Illuminate\Database\Eloquent\Model; class oAuthClient extends Model { protected $table = 'oauth_clients'; } Then you in your api.php route file add the following route: Route::post('/register-user', function () { $email= \Illuminate\Support\Facades\Input::get('email'); $password=\Illuminate\Support\Facades\Input::get('password'); $user = new \App\User(array( 'name' =>\Illuminate\Support\Facades\Input::get('name'), 'email' => \Illuminate\Support\Facades\Input::get('email'), 'password' => bcrypt(\Illuminate\Support\Facades\Input::get('password')), )); $user->save(); $oauth_client=new \App\oAuthClient(); $oauth_client->user_id=$user->id; $oauth_client->id=$email; $oauth_client->name=$user->name; $oauth_client->secret=base64_encode(hash_hmac('sha256',$password, 'secret', true)); $oauth_client->password_client=1; $oauth_client->personal_access_client=0; $oauth_client->redirect=''; $oauth_client->revoked=0; $oauth_client->save(); return [ 'message' => 'user successfully created.' ]; }); In the above code snippet you have to note that to generate the oauth_client secret you have to use some strong formula of encryption that you feel comfortable using it with your application. Also use the same technique to generate the secret key on your mobile app for the respective client/user. 
Now you can use the standard POST API offered by laravel passport to request access token through password grant using "oauth/token" using the following paramters: grant_type : 'password' client_id : '<email with which the user is registered>' client_secret : '<generate the client secret from the mobile app>' username : '<email with which the user is registered>' password : '<password entered by the user>' scope : '<leave empty as default>' The above will give you a response, if everything is correct, similar to : { "token_type": "Bearer", "expires_in": 3155673600, "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImp0aSI6IjMwZmM0MDk1NWY5YjUwNDViOTUzNDlmZjc2M2ExNDUxOTAxZjc5YTA5YjE4OWM1MjEzOTJlZmNiMDgwOWQzMzQwM2ExZWI4ZmMyODQ1MTE3In0.eyJhdWQiOiJzaHVqYWhtQGdtYWlsLmNvbSIsImp0aSI6IjMwZmM0MDk1NWY5YjUwNDViOTUzNDlmZjc2M2ExNDUxOTAxZjc5YTA5YjE4OWM1MjEzOTJlZmNiMDgwOWQzMzQwM2ExZWI4ZmMyODQ1MTE3IiwiaWF0IjoxNDc4MTQ1NjMyLCJuYmYiOjE0NzgxNDU2MzIsImV4cCI6NDYzMzgxOTIzMiwic3ViIjoiMSIsInNjb3BlcyI6W119.dj3g9b2AdPCK-im5uab-01SP71S7AR96R0FQTKKoaZV7M5ID1pSXDlmZw96o5Bd_Xsy0nUqFsPNRQsLvYaOuHZsP8v9mOVirBXLIBvPcBc6lDRdNXvRidNqeh4JHhJu9a5VzNlJPm3joBYSco4wYzNHs2BPSxXuuD3o63nKRHhuUHB-HwjVxj2GDwzEYXdZmf2ZXOGRJ99DlWGDvWx8xQgMQtd1E9Xk_Rs6Iu8tycjBpKBaC24AKxMI6T8DpelnFmUbMcz-pRsgCWCF_hxv6FpXav3jr1CLhhT58_udBvXjQAXEbtHeB7W_oaMcaqezHdAeOWDcnqREZHsnXHtKt0JpymcTWBkS2cg7sJzy6P9mOGgQ8B4gb8wt44_kHTeWnokk4yPFRZojkHLVZb8YL6hZxLlzgV1jCHUxXoHNe1VKlHArdlV8LAts9pqARZkyBRfwQ8oiTL-2m16FQ_qGg-9vI0Suv7d6_W126afI3LxqDBi8AyqpQzZX1FWmuJLV0QiNM0nzTyokzz7w1ilJP2PxIeUzMRlVaJyA395zq2HjbFEenCkd7bAmTGrgEkyWM6XEq1P7qIC_Ne_pLNAV6DLXUpg9bUWEHhHPXIDYKHS-c3N9fPDt8UVvGI8n0rPMieTN92NsYZ_6OqLNpcm6TrhMNZ9eg5EC0IPySrrv62jE", "refresh_token": "BbwRuDnVfm7tRQk7qSYByFbQKK+shYPDinYA9+q5c/ovIE1xETyWitvq6PU8AHnI5FWb06Nl2BVoBwCHCUmFaeRXQQgYY/i5vIDEQ/TJYFLVPRHDc7CKILF0kMakWKDk7wJdl5J6k5mN38th4pAAZOubiRoZ+2npLC7OSZd5Mq8LCBayzqtyy/QA5MY9ywCgb1PErzrGQhzB3mNhKj7U51ZnYT3nS5nCH7iJkCjaKvd/Hwsx2M6pXnpY45xlDVeTOjZxxaOF/e0+VT2FP2+TZMDRfrSMLBEkpbyX0M/VxunriRJPXTUvl3PW0sVOEa3J7+fbce0XWAKz7PNs3+hcdzD2Av2VHYF7/bJwcDCO77ky0G4JlHjqC0HnnGP2UWI5qR+tCSBga7+M1P3ESjcTCV6G6H+7f8SOSv9FECcJ8J5WUrU+EHrZ95bDtPc9scE4P3OEQaYchlC9GHk2ZoGo5oMJI6YACuRfbGQJNBjdjxvLIrAMrB6DNGDMbH6UZodkpZgQjGVuoCWgFEfLqegHbp34CjwL5ZFJGohV+E87KxedXE6aEseywyjmGLGZwAekjsjNwuxqD2QMb05sg9VkiUPMsvn45K9iCLS5clEKOTwkd+JuWw2IU80pA24aXN64RvOJX5VKMN6CPluJVLdjHeFL55SB7nlDjp15WhoMU1A=" } Hope it helps you! Cheers.
It looks like this could be related to redirect URLs which you can configure in the portal. Looking at the documentation for this. Running locally can cause problems because, by default, App Service authentication is only configured to allow access from your Mobile App backend. Use the following steps to change the App Service settings to enable authentication when running the server locally: Log in to the Azure portal Navigate to your Mobile App backend. Select Resource explorer in the DEVELOPMENT TOOLS menu. Click Go to open the resource explorer for your Mobile App backend in a new tab or window. Expand the config > authsettings node for your app. Click the Edit button to enable editing of the resource. Find the allowedExternalRedirectUrls element, which should be null. Add your URLs in an array: "allowedExternalRedirectUrls": [ "http://localhost:3000","https://localhost:3000"], Replace the URLs in the array with the URLs of your service, which in this example is http://localhost:3000 for the local service. You could also use http://localhost:4400, depending on how your app is configured. At the top of the page, click Read/Write, then click PUT to save your updates. You also need to add the same loopback URLs to the CORS whitelist settings: Navigate back to the Azure portal. Navigate to your Mobile App backend. Click CORS in the API menu. Enter each URL in the empty Allowed Origins text box. A new text box is created. Click SAVE After the backend updates, you will be able to use the new loopback URLs in your app.
A commit, in Git, is never changed. Neither rebase nor git commit --amend ever change any commit, as this is not possible.1 The trick here lies in defining "a commit". How do you know which commit is which? If I say "a commit in the Git repository for Git", well, there are over 40,000 commits in there. Which one do I mean? The unambiguous and definite way for me to tell you is for me to give you the hash ID, e.g., 9b7cbb315923e61bb0c4297c701089f30e116750. That is the true name for one specific commit: $ git cat-file -p 9b7cbb315923e61bb0c4297c701089f30e116750 | sed 's/@/ /' tree 4ba58c32960dcecc1fedede9c9362f5c10158f08 parent 77933f4449b8d6aa7529d627f3c7b55336f491db author Junio C Hamano <gitster pobox.com> 1418845774 -0800 committer Junio C Hamano <gitster pobox.com> 1418845774 -0800 Git 2.2.1 Signed-off-by: Junio C Hamano <gitster pobox.com> This name is permanently attached to this particular commit. It sure is an unwieldy and ugly name, though. Wouldn't it be nice to have a shorter, prettier, wieldy name? And there is one: I can point you to v2.2.1: $ git rev-parse v2.2.1^{commit} 9b7cbb315923e61bb0c4297c701089f30e116750 But in fact, v2.2.1 is not a commit at all, it's a tag. Specifically, it is a tag name (found in refs/tags/v2.2.1 or in the packed-refs file under the name v2.2.1) pointing to an annotated tag object,2 rather than directly to a commit: $ git rev-parse v2.2.1 7c56b20857837de401f79db236651a1bd886fbbb The tag object has the commit ID inside it, plus a whole bunch of additional goop, including a "PGP signature": $ git cat-file -p v2.2.1 | sed 's/@/ /' object 9b7cbb315923e61bb0c4297c701089f30e116750 type commit tag v2.2.1 tagger Junio C Hamano <gitster pobox.com> 1418851265 -0800 Git 2.2.1 -----BEGIN PGP SIGNATURE----- Version: GnuPG v1 iQIcBAABAgAGBQJUkfPBAAoJELC16IaWr+bLjfgP/iA78fk3NkTEROoyIVq6kPDH pZAlm4ObsKXAdl6sFqWe7xFxGExHYzJ5L3qGXs3VM+9Z3iDe2WZN3WbK3aFtYqfU AYRSTpnPzDf4L0vfyqiFS7//+LoeM2TogAV7SLdehMlodsL5HR6FiSz1zffSq8D0 Ci4XpGWHkqXLhfvUPC7foCgGpf7l38gsbJPbdkyKLK9/wtLSfkk45vK+wY6o3CCv JKBFr468958fvw+j73nxiT+Vne7TeL1Bq1kCq9M65dAjOpFjZiD408NaF7jTcNcx TMjdKoVlDNFHcUPMv9B5C308sRVUylmeUzb8XrQNji0+1NA5ivVgDfZsudWUtlTj jo9xku0Np4IdXPwxJNlO5tC2rnof4gdD4jWPJj/DvellNKCDXuLuXDZSKZDI9GSr OzLsad8uFX3MySPe+evIVF6qGS2KzI8PGNrohqWaPkX8cug22EW7lKJFpjYJb5gP 3nJUJvbsrMeyoH/GqxPzA5clqMGtsirnTiapMILNRmlC+3rzc0DkLw90BM6vKNOC eDTOI9Xj1JS9qbD6fEkxVNrXRDz0TFbtpFbFTtKk4zfAc/jTOqE9fqpV7afoQfON e1NwrjR5Kcts7ev23Y0G1WH3t2L0N2/q27kcjrulCEH1vtXlmaZFU6o+WKUVV7iH /YQnjNUOgRxQ1zBGof7h =yJ4Q -----END PGP SIGNATURE----- The PGP signature is what lets us decide whether we believe Junio C Hamano really made and signed this tag. It uses a stronger form of encryption digital signature than SHA-1 (which is good since SHA-1 is, at least in theory, breakable) that also supports both distributed verification, and the ability to revoke signatures (which SHA-1 itself does not). In the end, though, that only helps us if someone we trust and/or can verify has made such a PGP-signed tag, or has PGP-signed a commit. In theory, signing each commit might be a bit stronger since then there's a digital signature directly on the commit; but in practice, signing tags is much more convenient, and just as good since we don't regularly go about breaking SHA-1 (and, at least with current brute-force methods, it would leave obvious marks if we did, though that's way beyond the scope of this answer, and also somewhat beyond me to describe properly—cryptography is not my field). 
1Well, it's theoretically possible if you can break the SHA-1 hash. The way Git behaves if you come up with a new, different object that nonetheless produces the same hash means you won't ever pick up this new object if you already have the old one, though. This rule applies to all Git objects (commits, trees, annotated tags, and blobs), all of which are named by their hashes. What git rebase and git commit --amend do, to make it seem like they changed commits, is to make new copies of existing commits, and then shuffle the names around. The new commits have new, different hashes, and since a later (descendant) commit literally contains the hash of its immediate ancestor (parent) commit, "changing" one commit's hash (i.e., copying the commit object to a new, different commit object) forces the change to bubble down through the rest of the commits. We then re-point the existing (short, branch or tag) name to the tip of the new chain. This is why, given an end-point that we believe is trust-able, we can extend that trust to each previous object in the chain or tree. The technical term for this is a Merkle tree. 2This makes it what Git calls an "annotated tag": a tag name (which by itself would be a "lightweight tag") pointing to an annotated-tag object, stored in the Git repository, with the tag object pointing to some other Git object—usually a commit, but perhaps another tag, or even a tree or a blob. However, even "another tag" is somewhat rare—there are just three of these in the Git repository for Git—and the other two are practically unheard-of.
Check if your git installation is using the OSX Keychain to store credentials by running git config --global credential.helper. If that returns osxkeychain then it is.

From https://help.github.com/articles/updating-credentials-from-the-osx-keychain/:

You'll need to update your saved username and password in the git-credential-osxkeychain helper if you change your password or username on GitHub.

- In Finder, search for the Keychain Access app.
- In Keychain Access, search for github.com.
- Find the "internet password" entry for github.com.
- Edit or delete the entry accordingly.

Another thing you might need to check is whether you are using 2-factor authentication for the GitHub account you are pushing as. If you are, then you might need this from here:

When 2FA is enabled

If you have two-factor authentication enabled, you must create a personal access token to use as a password when authenticating to GitHub on the command line with HTTPS URLs.

For example, when you access a repository using Git on the command line using commands like git clone, git fetch, git pull or git push with HTTPS URLs, you must provide your GitHub username and your personal access token when prompted for a username and password.

For more information on setting up two-factor authentication, see "Adding security to your account with two-factor authentication."
I too faced the same issue. I was using the code from the IdentityServer4 QuickStart sample from here:

app.UseGoogleAuthentication(new GoogleOptions
{
    AuthenticationScheme = "Google",
    DisplayName = "Google",
    SignInScheme = IdentityServerConstants.ExternalCookieAuthenticationScheme,
    ClientId = "xxx.apps.googleusercontent.com",
    ClientSecret = "xxxx-Xxxxxxx"
});

I had to change the code to the following to fix the issue:

var CookieScheme = app.ApplicationServices.GetRequiredService<IOptions<IdentityOptions>>().Value.Cookies.ExternalCookieAuthenticationScheme;

app.UseGoogleAuthentication(new GoogleOptions
{
    AuthenticationScheme = "Google",
    DisplayName = "Google",
    SignInScheme = CookieScheme,
    ClientId = "xxx.apps.googleusercontent.com",
    ClientSecret = "xxxx-Xxxxxxx"
});

Instead of just using the constant 'external' from IdentityServerConstants.ExternalCookieAuthenticationScheme, I had to obtain the scheme used to identify external authentication cookies from the cookie options of the current identity system used by the app. That is what fixed the issue for me.
The TS asked for a working SHA-1 version of the script. However, SHA-1 is outdated and Amazon has datacenters that only accept SHA-256 encryption, hereby the download script that can be used for all S3 datacenters: It also follows HTTP 307 redirects. #!/bin/sh #USAGE: # download-aws.sh <bucket> <region> <source-file> <dest-file> set -e s3Key=xxxxxxxxxxxxxxxxxxxx s3Secret=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx file=$3 bucket=$1 host="${bucket}.s3.amazonaws.com" resource="/${file}" contentType="text/plain" dateValue="`date +'%Y%m%d'`" X_amz_date="`date +'%Y%m%dT%H%M%SZ'`" X_amz_algorithm="AWS4-HMAC-SHA256" awsRegion=$2 awsService="s3" X_amz_credential="$s3Key%2F$dateValue%2F$awsRegion%2F$awsService%2Faws4_request" X_amz_credential_auth="$s3Key/$dateValue/$awsRegion/$awsService/aws4_request" signedHeaders="host;x-amz-algorithm;x-amz-content-sha256;x-amz-credential;x-amz-date" contentHash="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855" HMAC_SHA256_asckey () { var=`/bin/echo -en $2 | openssl sha256 -hmac $1 -binary | xxd -p -c256` echo $var } HMAC_SHA256 () { var=`/bin/echo -en $2 | openssl dgst -sha256 -mac HMAC -macopt hexkey:$1 -binary | xxd -p -c256` echo $var } REQUEST () { canonicalRequest="GET\n$resource\n\n"\ "host:$1\n"\ "x-amz-algorithm:$X_amz_algorithm""\n"\ "x-amz-content-sha256:$contentHash""\n"\ "x-amz-credential:$X_amz_credential""\n"\ "x-amz-date:$X_amz_date""\n\n"\ "$signedHeaders\n"\ "$contentHash" #echo $canonicalRequest canonicalHash=`/bin/echo -en "$canonicalRequest" | openssl sha256 -binary | xxd -p -c256` stringToSign="$X_amz_algorithm\n$X_amz_date\n$dateValue/$awsRegion/s3/aws4_request\n$canonicalHash" #echo $stringToSign s1=`HMAC_SHA256_asckey "AWS4""$s3Secret" $dateValue` s2=`HMAC_SHA256 "$s1" "$awsRegion"` s3=`HMAC_SHA256 "$s2" "$awsService"` signingKey=`HMAC_SHA256 "$s3" "aws4_request"` signature=`/bin/echo -en $stringToSign | openssl dgst -sha256 -mac HMAC -macopt hexkey:$signingKey -binary | xxd -p -c256` #echo signature authorization="$X_amz_algorithm Credential=$X_amz_credential_auth,SignedHeaders=$signedHeaders,Signature=$signature" result=$(curl --silent -H "Host: $1" -H "X-Amz-Algorithm: $X_amz_algorithm" -H "X-Amz-Content-Sha256: $contentHash" -H "X-Amz-Credential: $X_amz_credential" -H "X-Amz-Date: $X_amz_date" -H "Authorization: $authorization" https://${1}/${file} -o "$2" --write-out "%{http_code}") if [ $result -eq 307 ]; then redirecthost=`cat $2 | sed -n 's:.*<Endpoint>\(.*\)</Endpoint>.*:\1:p'` REQUEST "$redirecthost" "$2" fi } REQUEST "$host" "$4" Tested on Ubuntu If someone knows a solution to remove the HMAC-ASCII step, you're welcome to reply. I got this only working in this way.
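For reference, the HMAC chain in the script (s1 through signingKey) follows the standard Signature Version 4 key derivation, which may help when porting or debugging it. The same steps expressed in Python:

import hashlib
import hmac

def _sign(key, msg):
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def signing_key(secret_key, date_stamp, region, service="s3"):
    # date_stamp is YYYYMMDD, matching $dateValue in the script
    k_date = _sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _sign(k_date, region)
    k_service = _sign(k_region, service)
    return _sign(k_service, "aws4_request")

# The request signature is then:
# hmac.new(signing_key(...), string_to_sign.encode("utf-8"), hashlib.sha256).hexdigest()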
There is a Provider API (for your backend server) introduced by Apple at WWDC 2015 and enhanced in 2016 to give the server more valuable feedback about push notifications. Here is a transcript of that WWDC session. From the transcript:

"If a device token has been removed, you will get an HTTP/2 response with status 410, or "removed." It will have a time stamp in payload indicating when APNS last learned that the device token has been removed."

APNS Server Response Codes

200 Success
400 Bad request
403 There was an error with the certificate or with the provider authentication token.
405 The request used a bad :method value. Only POST requests are supported.
410 The device token is no longer active for the topic.
413 The notification payload was too large.
429 The server received too many requests for the same device token.
500 Internal server error
503 The server is shutting down and unavailable.

(See the end of this answer for a rough sketch of how a provider might act on these codes.)

Now, what I cannot confirm is whether iOS invalidates the device token when the app is removed, or when the notification setting is turned off from the app's settings without deleting the app.

"410 does mean the app was uninstalled. The token will remain active if the user disables notification alerts in the app settings. The device will still receive the notification, even if no alert is shown to the user. The server will not know if the user has turned off notification alerts. Only the app knows this."

Thanks to Marcus Adams for clarifying this doubt.

Here goes the Apple Developer Guide!!!

If required, here is a paid SDK that can help you with uninstallation tracking.
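On the provider side, acting on these status codes is straightforward. A rough sketch (the HTTP/2 sending itself and the storage helpers such as remove_token_from_database are placeholders for whatever client and database you use):

def handle_apns_response(device_token, status, payload):
    if status == 200:
        return  # accepted by APNs
    if status == 410:
        # The token is no longer active for the topic -- most likely the app
        # was uninstalled, so stop sending to this token.
        remove_token_from_database(device_token, last_seen=payload.get("timestamp"))
    elif status in (400, 403, 405, 413):
        log_permanent_error(device_token, status, payload)   # fix the request or credentials
    elif status in (429, 500, 503):
        schedule_retry(device_token)                         # transient, try again later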
Here is an improved version of the accepted answer, updated for Angular2 final : import {Injectable} from "@angular/core"; import {Http, Headers, Response, Request, BaseRequestOptions, RequestMethod} from "@angular/http"; import {I18nService} from "../lang-picker/i18n.service"; import {Observable} from "rxjs"; @Injectable() export class HttpClient { constructor(private http: Http, private i18n: I18nService ) {} get(url:string):Observable<Response> { return this.request(url, RequestMethod.Get); } post(url:string, body:any) { return this.request(url, RequestMethod.Post, body); } private request(url:string, method:RequestMethod, body?:any):Observable<Response>{ let headers = new Headers(); this.createAcceptLanguageHeader(headers); let options = new BaseRequestOptions(); options.headers = headers; options.url = url; options.method = method; options.body = body; options.withCredentials = true; let request = new Request(options); return this.http.request(request); } // set the accept-language header using the value from i18n service that holds the language currently selected by the user private createAcceptLanguageHeader(headers:Headers) { headers.append('Accept-Language', this.i18n.getCurrentLang()); } } Of course it should be extended for methods like delete and put if needed (I don't need them yet at this point in my project). The advantage is that there is less duplicated code in the get/post/... methods. Note that in my case I use cookies for authentication. I needed the header for i18n (the Accept-Language header) because many values returned by our API are translated in the user's language. In my app the i18n service holds the language currently selected by the user.
Hoping the following code help to anyone who is still looking for a good piece of cake to get connected to PayPal. As many people, I've been investing a lot of time trying to get my PayPal token access without success, until I found the following: public class PayPalClient { public async Task RequestPayPalToken() { // Discussion about SSL secure channel // http://stackoverflow.com/questions/32994464/could-not-create-ssl-tls-secure-channel-despite-setting-servercertificatevalida ServicePointManager.ServerCertificateValidationCallback += (sender, cert, chain, sslPolicyErrors) => true; ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3 | SecurityProtocolType.Tls | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls12; try { // ClientId of your Paypal app API string APIClientId = "**_[your_API_Client_Id]_**"; // secret key of you Paypal app API string APISecret = "**_[your_API_secret]_**"; using (var client = new System.Net.Http.HttpClient()) { var byteArray = Encoding.UTF8.GetBytes(APIClientId + ":" + APISecret); client.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Basic", Convert.ToBase64String(byteArray)); var url = new Uri("https://api.sandbox.paypal.com/v1/oauth2/token", UriKind.Absolute); client.DefaultRequestHeaders.IfModifiedSince = DateTime.UtcNow; var requestParams = new List<KeyValuePair<string, string>> { new KeyValuePair<string, string>("grant_type", "client_credentials") }; var content = new FormUrlEncodedContent(requestParams); var webresponse = await client.PostAsync(url, content); var jsonString = await webresponse.Content.ReadAsStringAsync(); // response will deserialized using Jsonconver var payPalTokenModel = JsonConvert.DeserializeObject<PayPalTokenModel>(jsonString); } } catch (System.Exception ex) { //TODO: Log connection error } } } public class PayPalTokenModel { public string scope { get; set; } public string nonce { get; set; } public string access_token { get; set; } public string token_type { get; set; } public string app_id { get; set; } public int expires_in { get; set; } } This code works pretty well for me, hoping for you too. The credits belong to Patel Harshal who posted his solution here.
Finally I find the answer for my question.It's working fine...I attached the code below. I added the trim audio code in it.It will be useful for those who are trying to merge and trim the audio(swift2.3): func mixAudio() { let currentTime = CFAbsoluteTimeGetCurrent() let composition = AVMutableComposition() let compositionAudioTrack = composition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: kCMPersistentTrackID_Invalid) compositionAudioTrack.preferredVolume = 0.8 let avAsset = AVURLAsset.init(URL: soundFileURL, options: nil) print("\(avAsset)") var tracks = avAsset.tracksWithMediaType(AVMediaTypeAudio) let clipAudioTrack = tracks[0] do { try compositionAudioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, avAsset.duration), ofTrack: clipAudioTrack, atTime: kCMTimeZero) } catch _ { } let compositionAudioTrack1 = composition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: kCMPersistentTrackID_Invalid) compositionAudioTrack.preferredVolume = 0.8 let avAsset1 = AVURLAsset.init(URL: soundFileURL1) print(avAsset1) var tracks1 = avAsset1.tracksWithMediaType(AVMediaTypeAudio) let clipAudioTrack1 = tracks1[0] do { try compositionAudioTrack1.insertTimeRange(CMTimeRangeMake(kCMTimeZero, avAsset1.duration), ofTrack: clipAudioTrack1, atTime: kCMTimeZero) } catch _ { } var paths = NSSearchPathForDirectoriesInDomains(.LibraryDirectory, .UserDomainMask, true) let CachesDirectory = paths[0] let strOutputFilePath = CachesDirectory.stringByAppendingString("/Fav") print(" strOutputFilePath is \n \(strOutputFilePath)") let requiredOutputPath = CachesDirectory.stringByAppendingString("/Fav.m4a") print(" requiredOutputPath is \n \(requiredOutputPath)") soundFile1 = NSURL.fileURLWithPath(requiredOutputPath) print(" OUtput path is \n \(soundFile1)") var audioDuration = avAsset.duration var totalSeconds = CMTimeGetSeconds(audioDuration) var hours = floor(totalSeconds / 3600) var minutes = floor(totalSeconds % 3600 / 60) var seconds = Int64(totalSeconds % 3600 % 60) print("hours = \(hours), minutes = \(minutes), seconds = \(seconds)") let recordSettings:[String : AnyObject] = [ AVFormatIDKey: Int(kAudioFormatMPEG4AAC), AVSampleRateKey: 12000, AVNumberOfChannelsKey: 1, AVEncoderAudioQualityKey: AVAudioQuality.Low.rawValue ] do { audioRecorder = try AVAudioRecorder(URL: soundFile1, settings: recordSettings) audioRecorder!.delegate = self audioRecorder!.meteringEnabled = true audioRecorder!.prepareToRecord() } catch let error as NSError { audioRecorder = nil print(error.localizedDescription) } do { try NSFileManager.defaultManager().removeItemAtURL(soundFile1) } catch _ { } let exporter = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetAppleM4A) exporter!.outputURL = soundFile1 exporter!.outputFileType = AVFileTypeAppleM4A let duration = CMTimeGetSeconds(avAsset1.duration) print(duration) if (duration < 5.0) { print("sound is not long enough") return } // e.g. the first 30 seconds let startTime = CMTimeMake(0, 1) let stopTime = CMTimeMake(seconds,1) let exportTimeRange = CMTimeRangeFromTimeToTime(startTime, stopTime) print(exportTimeRange) exporter!.timeRange = exportTimeRange print(exporter!.timeRange) exporter!.exportAsynchronouslyWithCompletionHandler {() -> Void in print(" OUtput path is \n \(requiredOutputPath)") print("export complete: \(CFAbsoluteTimeGetCurrent() - currentTime)") var url:NSURL? if self.audioRecorder != nil { url = self.audioRecorder!.url } else { url = self.soundFile1! 
print(url) } print("playing \(url)") do { print(self.soundFile1) print(" OUtput path is \n \(requiredOutputPath)") self.setSessionPlayback() do { self.optData = try NSData(contentsOfURL: self.soundFile1!, options: NSDataReadingOptions.DataReadingMappedIfSafe) print(self.optData) self.recordencryption = self.optData.base64EncodedStringWithOptions(NSDataBase64EncodingOptions()) // print(self.recordencryption) self.myImageUploadRequest() } self.wasteplayer = try AVAudioPlayer(contentsOfURL: self.soundFile1) self.wasteplayer.numberOfLoops = 0 self.wasteplayer.play() } catch _ { } } }
In fact the culprit is OkHttp, not Retrofit. OkHttp removes all authentication headers on purpose: https://github.com/square/okhttp/blob/7cf6363662c7793c7694c8da0641be0508e04241/okhttp/src/main/java/com/squareup/okhttp/internal/http/HttpEngine.java

// When redirecting across hosts, drop all authentication headers. This
// is potentially annoying to the application layer since they have no
// way to retain them.
if (!sameConnection(url)) {
  requestBuilder.removeHeader("Authorization");
}

Here is the discussion of this issue: https://github.com/square/retrofit/issues/977

You could use the OkHttp authenticator. It will get called if a 401 error is returned, so you could use it to re-authenticate the request.

httpClient.authenticator(new Authenticator() {
    @Override
    public Request authenticate(Route route, Response response) throws IOException {
        return response.request().newBuilder()
                .header("Authorization", "Token " + DataManager.getInstance().getPreferencesManager().getAuthToken())
                .build();
    }
});

However, in my case the server returned 403 Forbidden instead of 401, so I had to read response.headers().get("Location") in place and create and fire another network request:

public Call<Response> getMoreBills(@Header("Authorization") String authorization, @Url String nextPage)
[UnmanagedFunctionPointer(CallingConvention.Cdecl)] private delegate ulong SglAuthentA(IntPtr AuthentCode); The delegate declaration is not correct and does not match the api function signature. An ULONG in C is an uint in C#. An ULONG* in C is ambiguous, could be a ref uint or it could be a uint[]. Since you are supposed to pass a 48 byte authentication code, you know it is an array. Fix: private delegate uint SglAuthentA(uint[] authentCode); Be sure to pass the proper authentication code. It is not 5, the array must have 12 elements. If you don't have one then call the manufacturer to acquire one. private const string DLL_Path = @"C:\Users\admin123\Desktop\MyDlls\SglW32.dll"; Do beware that this is not a workaround for not being able to use [DllImport]. Hardcoding the path is a problem, the file is not going to be present in that directory on the user's machine. The DLL itself does not have any dependencies that prevents it from loading, the only plausible reason for having trouble is you just forgetting to copy the DLL into the proper place. There is only one such place, the same directory as your EXE. Fix this the right way, use Project > Add Existing Item > select the DLL. Select the added file in the Solution Explorer window. In the Properties window, change the Copy to Output Directory setting to "Copy if newer". Rebuild your project and note that you'll now get the DLL in your project's bin\Debug directory. Now [DllImport] will work. A caution about the manual, it lists code samples in Visual Basic. Which is in general what you'd normally use as a guide on learning how to use the api. The code is however not VB.NET code, it is VB6 code. Where ever you see Long in the sample code, you should use uint or int instead. Very sloppy, it casts a big question mark on the quality of the product. Something else they don't seem to address at all is how to get your own code secure. Very important when you use a dongle. Beware it is very trivial for anybody to reverse-engineer your authentication code. And worse, to decompile your program and remove the authentication check. You need to use an obfuscator.
Get just the certificate portion from an openssl pem file Don't use OpenSSL. Instead, use cat, awk, sed and redirections. For example: $ ls *.pem DigiCertHighAssuranceEVRootCA.pem $ cat DigiCertHighAssuranceEVRootCA.pem -----BEGIN CERTIFICATE----- MIIDxTCCAq2gAwIBAgIQAqxcJmoLQJuPC3nyrkYldzANBgkqhkiG9w0BAQUFADBs MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 ... Yzi9RKR/5CYrCsSXaQ3pjOLAEFe4yHYSkVXySGnYvCoCWw9E1CAx2/S6cCZdkGCe vEsXCS+0yx5DaMkHJ8HSXPfqIbloEpw8nL+e/IBcm2PN7EeqJSdnoDfzAIJ9VNep +OkuE6N36B9K -----END CERTIFICATE----- Then, whack the first line: $ cat DigiCertHighAssuranceEVRootCA.pem | sed '1,1d' MIIDxTCCAq2gAwIBAgIQAqxcJmoLQJuPC3nyrkYldzANBgkqhkiG9w0BAQUFADBs MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 ... And whack the last line: $ cat DigiCertHighAssuranceEVRootCA.pem | sed '$ d' ... Yzi9RKR/5CYrCsSXaQ3pjOLAEFe4yHYSkVXySGnYvCoCWw9E1CAx2/S6cCZdkGCe vEsXCS+0yx5DaMkHJ8HSXPfqIbloEpw8nL+e/IBcm2PN7EeqJSdnoDfzAIJ9VNep +OkuE6N36B9K Put them together: $ cat DigiCertHighAssuranceEVRootCA.pem | sed '1,1d' | sed '$ d' MIIDxTCCAq2gAwIBAgIQAqxcJmoLQJuPC3nyrkYldzANBgkqhkiG9w0BAQUFADBs MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 ... Yzi9RKR/5CYrCsSXaQ3pjOLAEFe4yHYSkVXySGnYvCoCWw9E1CAx2/S6cCZdkGCe vEsXCS+0yx5DaMkHJ8HSXPfqIbloEpw8nL+e/IBcm2PN7EeqJSdnoDfzAIJ9VNep +OkuE6N36B9K Finally, you can fold the sed commands by sparating the separate commands with a semi-colon: $ cat DigiCertHighAssuranceEVRootCA.pem | sed '1,1d;$ d' MIIDxTCCAq2gAwIBAgIQAqxcJmoLQJuPC3nyrkYldzANBgkqhkiG9w0BAQUFADBs MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMRkwFwYDVQQLExB3 d3cuZGlnaWNlcnQuY29tMSswKQYDVQQDEyJEaWdpQ2VydCBIaWdoIEFzc3VyYW5j ZSBFViBSb290IENBMB4XDTA2MTExMDAwMDAwMFoXDTMxMTExMDAwMDAwMFowbDEL MAkGA1UEBhMCVVMxFTATBgNVBAoTDERpZ2lDZXJ0IEluYzEZMBcGA1UECxMQd3d3 LmRpZ2ljZXJ0LmNvbTErMCkGA1UEAxMiRGlnaUNlcnQgSGlnaCBBc3N1cmFuY2Ug RVYgUm9vdCBDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMbM5XPm +9S75S0tMqbf5YE/yc0lSbZxKsPVlDRnogocsF9ppkCxxLeyj9CYpKlBWTrT3JTW PNt0OKRKzE0lgvdKpVMSOO7zSW1xkX5jtqumX8OkhPhPYlG++MXs2ziS4wblCJEM xChBVfvLWokVfnHoNb9Ncgk9vjo4UFt3MRuNs8ckRZqnrG0AFFoEt7oT61EKmEFB Ik5lYYeBQVCmeVyJ3hlKV9Uu5l0cUyx+mM0aBhakaHPQNAQTXKFx01p8VdteZOE3 hzBWBOURtCmAEvF5OYiiAhF8J2a3iLd48soKqDirCmTCv2ZdlYTBoSUeh10aUAsg EsxBu24LUTi4S8sCAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgGGMA8GA1UdEwEB/wQF MAMBAf8wHQYDVR0OBBYEFLE+w2kD+L9HAdSYJhoIAu9jZCvDMB8GA1UdIwQYMBaA FLE+w2kD+L9HAdSYJhoIAu9jZCvDMA0GCSqGSIb3DQEBBQUAA4IBAQAcGgaX3Nec nzyIZgYIVyHbIUf4KmeqvxgydkAQV8GK83rZEWWONfqe/EW1ntlMMUu4kehDLI6z eM7b41N5cdblIZQB2lWHmiRk9opmzN6cN82oNLFpmyPInngiK3BD41VHMWEZ71jF hS9OMPagMRYjyOfiZRYzy78aG6A9+MpeizGLYAiJLQwGXFK3xPkKmNEVX58Svnw2 Yzi9RKR/5CYrCsSXaQ3pjOLAEFe4yHYSkVXySGnYvCoCWw9E1CAx2/S6cCZdkGCe vEsXCS+0yx5DaMkHJ8HSXPfqIbloEpw8nL+e/IBcm2PN7EeqJSdnoDfzAIJ9VNep +OkuE6N36B9K Now, I'm not sure what you were trying to do with -certopt X, so take this with a grain of salt... 
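If you would rather do the extraction in a script than with sed, the same idea is easy to express in a few lines of Python. This is only an illustrative sketch (the file and function names are mine, not part of the answer above): it keeps the lines between the BEGIN and END markers of the first certificate and prints them.

# pem_body.py - print just the Base64 body of the first certificate in a PEM file
import sys

def pem_body(path):
    body = []
    inside = False
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line == "-----BEGIN CERTIFICATE-----":
                inside = True
            elif line == "-----END CERTIFICATE-----":
                break
            elif inside:
                body.append(line)
    return "\n".join(body)

if __name__ == "__main__":
    print(pem_body(sys.argv[1]))

Running python pem_body.py DigiCertHighAssuranceEVRootCA.pem prints the same Base64 block as the sed pipeline above.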
To print the certificate in readable form, use -text -noout: $ cat DigiCertHighAssuranceEVRootCA.pem | openssl x509 -text -noout Certificate: Data: Version: 3 (0x2) Serial Number: 02:ac:5c:26:6a:0b:40:9b:8f:0b:79:f2:ae:46:25:77 Signature Algorithm: sha1WithRSAEncryption Issuer: C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert High Assurance EV Root CA Validity Not Before: Nov 10 00:00:00 2006 GMT Not After : Nov 10 00:00:00 2031 GMT Subject: C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert High Assurance EV Root CA Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:c6:cc:e5:73:e6:fb:d4:bb:e5:2d:2d:32:a6:df: e5:81:3f:c9:cd:25:49:b6:71:2a:c3:d5:94:34:67: a2:0a:1c:b0:5f:69:a6:40:b1:c4:b7:b2:8f:d0:98: a4:a9:41:59:3a:d3:dc:94:d6:3c:db:74:38:a4:4a: cc:4d:25:82:f7:4a:a5:53:12:38:ee:f3:49:6d:71: 91:7e:63:b6:ab:a6:5f:c3:a4:84:f8:4f:62:51:be: f8:c5:ec:db:38:92:e3:06:e5:08:91:0c:c4:28:41: 55:fb:cb:5a:89:15:7e:71:e8:35:bf:4d:72:09:3d: be:3a:38:50:5b:77:31:1b:8d:b3:c7:24:45:9a:a7: ac:6d:00:14:5a:04:b7:ba:13:eb:51:0a:98:41:41: 22:4e:65:61:87:81:41:50:a6:79:5c:89:de:19:4a: 57:d5:2e:e6:5d:1c:53:2c:7e:98:cd:1a:06:16:a4: 68:73:d0:34:04:13:5c:a1:71:d3:5a:7c:55:db:5e: 64:e1:37:87:30:56:04:e5:11:b4:29:80:12:f1:79: 39:88:a2:02:11:7c:27:66:b7:88:b7:78:f2:ca:0a: a8:38:ab:0a:64:c2:bf:66:5d:95:84:c1:a1:25:1e: 87:5d:1a:50:0b:20:12:cc:41:bb:6e:0b:51:38:b8: 4b:cb Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Key Usage: critical Digital Signature, Certificate Sign, CRL Sign X509v3 Basic Constraints: critical CA:TRUE X509v3 Subject Key Identifier: B1:3E:C3:69:03:F8:BF:47:01:D4:98:26:1A:08:02:EF:63:64:2B:C3 X509v3 Authority Key Identifier: keyid:B1:3E:C3:69:03:F8:BF:47:01:D4:98:26:1A:08:02:EF:63:64:2B:C3 Signature Algorithm: sha1WithRSAEncryption 1c:1a:06:97:dc:d7:9c:9f:3c:88:66:06:08:57:21:db:21:47: f8:2a:67:aa:bf:18:32:76:40:10:57:c1:8a:f3:7a:d9:11:65: 8e:35:fa:9e:fc:45:b5:9e:d9:4c:31:4b:b8:91:e8:43:2c:8e: b3:78:ce:db:e3:53:79:71:d6:e5:21:94:01:da:55:87:9a:24: 64:f6:8a:66:cc:de:9c:37:cd:a8:34:b1:69:9b:23:c8:9e:78: 22:2b:70:43:e3:55:47:31:61:19:ef:58:c5:85:2f:4e:30:f6: a0:31:16:23:c8:e7:e2:65:16:33:cb:bf:1a:1b:a0:3d:f8:ca: 5e:8b:31:8b:60:08:89:2d:0c:06:5c:52:b7:c4:f9:0a:98:d1: 15:5f:9f:12:be:7c:36:63:38:bd:44:a4:7f:e4:26:2b:0a:c4: 97:69:0d:e9:8c:e2:c0:10:57:b8:c8:76:12:91:55:f2:48:69: d8:bc:2a:02:5b:0f:44:d4:20:31:db:f4:ba:70:26:5d:90:60: 9e:bc:4b:17:09:2f:b4:cb:1e:43:68:c9:07:27:c1:d2:5c:f7: ea:21:b9:68:12:9c:3c:9c:bf:9e:fc:80:5c:9b:63:cd:ec:47: aa:25:27:67:a0:37:f3:00:82:7d:54:d7:a9:f8:e9:2e:13:a3: 77:e8:1f:4a You can also use the openssl x509 utility to open the file for you: $ openssl x509 -in DigiCertHighAssuranceEVRootCA.pem -inform PEM -text -noout Certificate: Data: Version: 3 (0x2) Serial Number: 02:ac:5c:26:6a:0b:40:9b:8f:0b:79:f2:ae:46:25:77 ... And convert from PEM to DER: $ openssl x509 -in DigiCertHighAssuranceEVRootCA.pem -inform PEM \ -out DigiCertHighAssuranceEVRootCA.der -outform DER $ dumpasn1 DigiCertHighAssuranceEVRootCA.der 0 965: SEQUENCE { 4 685: SEQUENCE { 8 3: [0] { 10 1: INTEGER 2 : } 13 16: INTEGER 02 AC 5C 26 6A 0B 40 9B 8F 0B 79 F2 AE 46 25 77 ... Is there any way of outputting just the encoded certificate using OpenSSL? How is it done? No. OpenSSL has -outform in addition to -inform. There are three inform's and outform's available: DER, PEM and NET. None of them are naked Base64. Also see the openssl x509 man page.
Difference among NSURLConnection, NSURLSession The entire model is different. NSURLSession is designed around the assumption that you'll have a lot of requests that need similar configuration (standard sets of headers, etc.), and makes life much easier if you do. NSURLSession also provides support for background downloads, which make it possible to continue downloading resources while your app isn't running (or when it is in the background on iOS). For some use cases, this is also a major win. NSURLSession also provides a grouping of related requests, making it easy to cancel all of the requests associated with a particular work unit, such as canceling all loads associated with loading a web page when the user closes the window or tab. NSURLSession also provides nicer interfaces for requesting data using blocks, in that it allows you to combine them with delegate methods for doing custom authentication handling, redirect handling, etc., whereas with NSURLConnection if you suddenly realized you needed to do those things, you had to refactor your code to not use block-based callbacks. Difference among NSURLConnection, NSURLSession and AFNetworking? NSURLConnection and NSURLRequest are the provided Cocoa classes for managing connections. In iOS 7, Apple added NSURLSession. But I think you'll find AFNetworking to be a framework that further simplifies network requests (notably complex HTTP requests). If you don't want to use third-party frameworks for any reason, you can use NSURLConnection and/or NSURLSession directly. It just takes a little more coding. For information on NSURLConnection and NSURLSession, see the URL Loading System Programming Guide. Found in reference1 and reference2. Thanks to rob and dgatwood for such amazing answers...
The idea of a secure world is to keep the code executing there as small and as simple as possible - the bare minimum to fulfil its duties (usually controlling access to some resource like encryption keys or hardware or facilitating some secure functions like encryption/decryption). Because the amount of code in the secure world is small, it can be audited easily and there's reduced surface area for bugs to be introduced. However, it does not mean that the secure world is automatically 'secure'. If there is a vulnerability in the secure world code, it can be exploited just like any other security vulnerability. Contrast this with code executing in the normal world. For example, the Linux kernel is much more complex and much harder to audit. There are plenty of examples of kernel vulnerabilities and exploits that allow malicious code to take over the kernel. To illustrate this point, let's suppose you have a system where users can pay money via some challenge-response transaction system. When they want to make a transaction, the device must wait for the user to press a physical button before it signs the transaction with a cryptographic key and authorises the payment. But what if some malicious code exploited a kernel bug and is able to run arbitrary code in kernel mode? Normally this means total defeat. The malware is able to bypass all control mechanisms and read out the signing keys. Now the malware can make payments to anyone it wants without even needing the user to press a button. What if there was a way that allows for signing transactions without the Linux kernel knowing the actual key? Enter the secure world system. We can have a small secure world OS with the sole purpose of signing transactions and holding onto the signing key. However, it will refuse to sign a transaction unless the user presses a special button. It's a very small OS (in the kilobytes) and you've hired people to audit it. For all intents and purposes, there are no bugs or security vulnerabilities in the secure world OS. When the normal world OS (e.g. Linux) needs to sign a transaction, it makes a SMC call to transfer control to the secure world (note, the normal world is not allowed to modify/read the secure world at all) with the transaction it wants to sign. The secure world OS will wait for a button press from the user, sign the transaction, then transfer control back to normal world. Now, imagine the same situation where malware has taken over the Linux kernel. The malware now can't read the signing key because it's in the secure world. The malware can't sign transactions without the user's consent since the secure world OS will refuse to sign a transaction unless the user presses the button. This kind of use case is what the secure world is designed for. The whole idea is the hardware enforced separation between the secure and normal world. From the normal world, there is no way to directly tamper with the secure world because the hardware guarantees that. I haven't worked with TrustZone in particular but I imagine once the secure world OS has booted, there is no way to directly modify it. I don't think application developers should be able to 'add' services to the secure world OS since that would defeat the purpose of it. I haven't seen any vendors allowing third parties to add code to their secure world OS. To answer your last question, I've already answered it in an answer here. 
SMC exceptions are how you request a service from the secure world OS - they're basically system calls, but for the secure world OS. What would malicious code gain by transferring control to the secure world? Nothing, for two reasons: you cannot modify or read the secure world from the normal world, and when you transfer control to the secure world, you lose control in the normal world.
I ran into this problem when I tried to take a forms authentication cookie created by an ASP.NET 2.0 app and decrypt it inside an .NET4.5 Web API project. The solution was to add an attribute called "compatibilityMode" to the "machineKey" node inside my web api's web.config file: <machineKey ... compatibilityMode="Framework20SP2"/> Documentation: https://msdn.microsoft.com/en-us/library/system.web.configuration.machinekeysection.compatibilitymode.aspx And from the doc, here are the allowed values for that attribute: Framework20SP1. This value specifies that ASP.NET uses encryption methods that were available in versions of ASP.NET earlier than 2.0 SP2. Use this value for all servers in a web farm if any server has a version of the .NET Framework earlier than 2.0 SP2. This is the default value unless the application Web.config file has the targetFramework attribute of the httpRuntime element set to "4.5". Framework20SP2. This value specifies that ASP.NET uses upgraded encryption methods that were introduced in the .NET Framework 2.0 SP2. Use this value for all servers in a web farm if all servers have the .NET Framework 2.0 SP2 or later but at least one does not have the .NET Framework 4.5. Framework45. Cryptographic enhancements for ASP.NET 4.5 are in effect. This is the default value if the application Web.config file has the targetFramework attribute of the httpRuntime element set to "4.5".
The above answer by supermonk clarifies most of the places to check. I faced a similar problem to the OP's, and the mistake was not in the broker configuration but in the client-side configuration. Although the official documentation implicitly mentions creating the client.keystore as step 1, I missed signing the client certificate with the CA, as was done for the server.keystore. This was causing the Kafka broker to refuse the connection from the clients (producer/consumer). Performing the following steps eliminated the problem in my case. keytool -keystore kafka.client.keystore.jks -alias localhost -certreq -file cert-file openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days $VALIDITY -CAcreateserial -passin pass:$PASSWORD keytool -keystore kafka.client.keystore.jks -alias CARoot -import -file ca-cert keytool -keystore kafka.client.keystore.jks -alias localhost -import -file cert-signed This signs the certificate using the CA cert and adds both the CARoot certificate and the signed certificate to the client.keystore. Reference: Confluent blog on securing Apache Kafka
Resolving the incompatible encryption preference error I confirm that yonivav was on the right track when encountering the following error during connection of peers: [MCSession] Peer [DisplayName] has incompatible encryption preference [Required]. However, setting the session encryption preference to .none did not work for me. At https://developer.apple.com/reference/multipeerconnectivity/mcsession/1407000-init it is stated that On apps linked on or after iOS 9, the encryption is set to required. On apps linked prior to iOS 9, the encryption is set to optional. Since I was using one client on iOS 10.1 and another client with a lower iOS version, I initialized the session using var session = MCSession(peer: ownPeerID, securityIdentity: nil, encryptionPreference: .optional) and the connection works reliably again. Bluetooth issues However, I must confirm that the connection is not established using Bluetooth only. The invitation is sent and accepted, the connection state changes to connecting and then to not connected 10 seconds later. Right after changing the state to connecting a [ViceroyTrace] [ICE][ERROR] ICEStopConnectivityCheck() found no ICE check with call id (108154439) error is thrown. If I turn on WiFi and Bluetooth on the iOS 10.1 device, the Bluetooth-only device is discovered, followed by a dozen [ViceroyTrace] [ICE][ERROR] Send BINDING_REQUEST failed(C01A0041). errors and a connection state change to not connected. Update to iOS 10.1.1: still broken I updated the iPhone from iOS 10.1 to 10.1.1, and the errors still persist, no changes at all. Update to iOS 10.2.1: seems to work! After the update from 10.2 (where it was still broken) to 10.2.1, it seems to work again (tested with one device on 10.2.1, the other device being an old iOS 8 device; a colleague tested with 10.2.1 and 10.2 and oddly it worked too)! The connection is established when using Bluetooth only (disabling WiFi). However, at times I still get all the ICE errors and connection errors in the log, BUT not always. Right now I tried to reproduce them and it runs without warnings. Strange, but the good news is: it seems like Apple fixed the issue!
@dimas's answer is not logically consistent with your question; ifAllGranted cannot be directly replaced with hasAnyRole. From the Spring Security 3—>4 migration guide: Old: <sec:authorize ifAllGranted="ROLE_ADMIN,ROLE_USER"> <p>Must have ROLE_ADMIN and ROLE_USER</p> </sec:authorize> New (SPeL): <sec:authorize access="hasRole('ROLE_ADMIN') and hasRole('ROLE_USER')"> <p>Must have ROLE_ADMIN and ROLE_USER</p> </sec:authorize> Replacing ifAllGranted directly with hasAnyRole will cause spring to evaluate the statement using an OR instead of an AND. That is, hasAnyRole will return true if the authenticated principal contains at least one of the specified roles, whereas Spring's (now deprecated as of Spring Security 4) ifAllGranted method only returned true if the authenticated principal contained all of the specified roles. TL;DR: To replicate the behavior of ifAllGranted using Spring Security Taglib's new authentication Expression Language, the hasRole('ROLE_1') and hasRole('ROLE_2') pattern needs to be used.
Since iOS 9 there is touchIDAuthenticationAllowableReuseDuration on the context: The duration for which Touch ID authentication reuse is allowable. If the device was successfully authenticated using Touch ID within the specified time interval, then authentication for the receiver succeeds automatically, without prompting the user for Touch ID. The default value is 0, meaning that Touch ID authentication cannot be reused. The maximum allowable duration for Touch ID authentication reuse is specified by the LATouchIDAuthenticationMaximumAllowableReuseDuration constant. You cannot specify a longer duration by setting this property to a value greater than this constant. Availability iOS (9.0 and later), macOS (10.12 and later) If you set it to 60, for example, context.touchIDAuthenticationAllowableReuseDuration = 60 authentication will succeed automatically, without prompting, if the user has successfully passed the Touch ID check within the last 60 seconds. So you can set it to whatever value suits you. I find it very useful; it's annoying to ask users to touch again when they just did it a few seconds ago (to unlock the screen, for example).
Laravel calls the render function of App\Exceptions\Handler class. So overriding it will not work. You have to add it in App\Exceptions\Handler class only. For example: <?php namespace App\Exceptions; use Exception; use Illuminate\Auth\AuthenticationException; use App\Project\Frontend\Repo\Vehicle\EloquentVehicle; use Illuminate\Foundation\Exceptions\Handler as ExceptionHandler; class Handler extends ExceptionHandler { /** * A list of the exception types that should not be reported. * * @var array */ protected $dontReport = [ \Illuminate\Auth\AuthenticationException::class, \Illuminate\Auth\Access\AuthorizationException::class, \Symfony\Component\HttpKernel\Exception\HttpException::class, \Illuminate\Database\Eloquent\ModelNotFoundException::class, \Illuminate\Session\TokenMismatchException::class, \Illuminate\Validation\ValidationException::class, ]; /** * Report or log an exception. * * This is a great spot to send exceptions to Sentry, Bugsnag, etc. * * @param \Exception $exception * @return void */ public function report(Exception $exception) { parent::report($exception); } /** * Render an exception into an HTTP response. * * @param \Illuminate\Http\Request $request * @param \Exception $exception * @return \Illuminate\Http\Response */ public function render($request, Exception $exception) { if($exception instanceof CustomException) { return $this->showCustomErrorPage(); } return parent::render($request, $exception); } protected function showCustomErrorPage() { $recentlyAdded = app(EloquentVehicle::class)->fetchLatestVehicles(0, 12); return view()->make('errors.404Custom')->with('recentlyAdded', $recentlyAdded); } }
There are two options here. I prefer option #2 but I'll start with #1. Option 1: configure CORS correctly Often 405 errors in CouchDB are due to misconfigurations, e.g. not including all the possible headers and methods, of which there are a lot if you want to support all browsers/devices. On the PouchDB team we've gathered the "best practices" together into a single module: add-cors-to-couchdb, which should work for both CouchDB 1.6.1 and CouchDB 2.0. Just run: npm install --global add-cors-to-couchdb add-cors-to-couchdb http://example.com:5984 -u admin_username -p admin_password This should fix your problem; if not, check out the tests for PouchDB or for pouchdb-authentication which successfully use this method to test against a database running at localhost:5984 (including changing a user's password, which is what you're trying to do). Option 2: avoid CORS using a reverse proxy This is really the best option. It's better for a few reasons: Spend a couple minutes configuring Apache/nginx to avoid CORS altogether, save yourself headaches later trying to get CORS to work correctly CORS is less performant than no-CORS because the browser needs to do preflight OPTIONS requests, which add extra latency especially during replication This is also easier in mobile hybrid apps; Cordova has a one-liner option to whitelist certain domains and avoid CORS. I typically use the Nginx as a reverse proxy guide for CouchDB and route to a database running at example.com/couchdb. E.g.: location /couchdb { rewrite /couchdb/(.*) /$1 break; proxy_pass http://localhost:5984; proxy_redirect off; proxy_buffering off; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } Note that this may break /_utils (Futon/Fauxton), but the best bet in those cases is to set up a reverse tunnel to your server via SSH and look at it locally: ssh -L3000:localhost:5984 [email protected] # now open localhost:3000/_utils in a browser You probably don't want Futon/Fauxton exposed to the world anyway. The advantages here are that you can block off certain methods or certain parts of CouchDB using Nginx/Apache, which is typically more flexible than the HTTP options available in CouchDB.
It requires iterating over the list of files. Based on this, the code fetches the title and URL link of each file within the folder. The code can be adjusted to target a specific folder by supplying the folder's id, such as ListFolder('id'). The example below queries the root #!/usr/bin/python # -*- coding: utf-8 -*- from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive gauth = GoogleAuth() gauth.LocalWebserverAuth() # Creates local webserver and auto handles authentication #Make GoogleDrive instance with Authenticated GoogleAuth instance drive = GoogleDrive(gauth) def ListFolder(parent): filelist=[] file_list = drive.ListFile({'q': "'%s' in parents and trashed=false" % parent}).GetList() for f in file_list: if f['mimeType']=='application/vnd.google-apps.folder': # if folder filelist.append({"id":f['id'],"title":f['title'],"list":ListFolder(f['id'])}) else: filelist.append({"title":f['title'],"title1":f['alternateLink']}) return filelist ListFolder('root')
I was finally able to find a real solution when using .NET 4.5. This code allows you to use a custom validator only for a specific WCF client. It has been tested against BasicHttpBinding with BasicHttpSecurityMode.Transport. There is a new property named SslCertificateAuthentication in ClientBase.ClientCredentials.ServiceCertificate. You can initialize this property with an X509ServiceCertificateAuthentication where you can provide a custom X509CertificateValidator. For example: // initialize the ssl certificate authentication client.ClientCredentials.ServiceCertificate.SslCertificateAuthentication = new X509ServiceCertificateAuthentication() { CertificateValidationMode = X509CertificateValidationMode.Custom, CustomCertificateValidator = new CustomValidator(serverCert) }; // simple custom validator, only valid against a specific thumbprint class CustomValidator : X509CertificateValidator { private readonly X509Certificate2 knownCertificate; public CustomValidator(X509Certificate2 knownCertificate) { this.knownCertificate = knownCertificate; } public override void Validate(X509Certificate2 certificate) { if (this.knownCertificate.Thumbprint != certificate.Thumbprint) { throw new SecurityTokenValidationException("Unknown certificate"); } } }
Because the marked answer was noted as correct, I feel it necessary to note some key points that I think many would agree with: You almost NEVER want to put server process logic of that kind within your routes directory, especially when working to create an API with the intent to put it into production. It's a dirty route to take and not entirely safe, UNLESS it's for things that are safe to process within your routes directory. Like, on a lesser scale, the base logic for sending a notification (SMS, email, push, Slack) to staff members about a new letter/blog/memo being published, as an example. ALWAYS attempt to leverage and make use of as much of a framework's features as possible before attempting to "hackishly" accomplish a task that may have been accomplished multiple times before. Ensure that you're doing proper research about something that has been accomplished already; that way it's easier to simply reference a video or tutorial that shows how to properly do what you're trying to do. That being said, a good starting point would be to watch the following video, which describes the basics of how to properly set up what you're looking to set up: https://laracasts.com/series/whats-new-in-laravel-5-3/episodes/13 In many respects, the video tutorial is very well done and thorough from start to finish. Be sure to brush up on the different grant types for OAuth 2.0 as well, so you'll have a better understanding of which specific type you/your application need based on your application's position to consume the API: https://www.digitalocean.com/community/tutorials/an-introduction-to-oauth-2 In addition, be sure to USE Laravel's out-of-the-box features for login and registration when creating or logging in users. The controllers are built for you when you run the following in your console: php artisan make:auth Aside from that, if Passport is somewhat of a mystery, you can always pull in the laravel/socialite package (https://github.com/laravel/socialite). It will allow you to "Log in with (Social Network Here)", provided that is the route you're also aiming to go. END NOTE: The piece of your question that stuck out the most was that a person will register but will not log in with Facebook; instead they will have an access token to hit various API endpoints. So if I'm reading you right, you're aiming to use a user's data from Facebook: when data is returned, the user is considered logged in and is to be issued an access token. SO: Use Socialite to send a "login with Facebook" request to Facebook. This will get the user's data and leverage a bit of Facebook's authentication process. When a request is returned with user data in the body, run it through a check to ensure that there is data (a simple if statement should be fine). Since Facebook will have already authenticated that user and the sent credentials, you should be good to go. You can either fire off an internal proxy within your Login Controller (which is the cleaner and safer way to do it) or you can issue a JWT (which is covered in the last 5 minutes of the video posted in this answer above). Below is some example code to get you started. App\Http\Controllers\Auth\LoginController.php class LoginController extends Controller { // ...
protected function authenticateClient(Request $request) { $credentials = $this->credentials($request); $data = $request->all(); $user = User::where('email', $credentials['email'])->first(); $request->request->add([ 'grant_type' => $data['grant_type'], 'client_id' => $data['client_id'], 'client_secret' => $data['client_secret'], 'username' => $credentials['email'], 'password' => $credentials['password'], 'scope' => null, ]); $proxy = Request::create( 'oauth/token', 'POST' ); return Route::dispatch($proxy); } protected function authenticated(Request $request, $user) { return $this->authenticateClient($request); } protected function sendLoginResponse(Request $request) { $request->session()->regenerate(); $this->clearLoginAttempts($request); return $this->authenticated($request, $this->guard()->user()); } public function login(Request $request) { $credentials = $this->credentials($request); if ($this->guard('api')->attempt($credentials, $request->has('remember'))) { return $this->sendLoginResponse($request); } } } The code above is used IN CASE you're aiming to use the Password Grant type for authenticating clients through Passport. However, I would seriously look at the tutorial video before jumping the gun on anything. It WILL help you out a lot with how to use Laravel 5.3 with Passport.
So things seem to have changed quite a lot recently and the Pi comes with Bluez 5.23 in 2016. Having just spent two days on this, these steps have solved it for my Pi but might help with any Debian Jessie install. I hope so. Tested on a new Pi, running Jessie with a fresh install just now. This will give a Bluetooth PAN bridged to your eth0 network (and thus use your existing DHCP/DNS server etc.). This is my first post, so please forgive stupidity around the various conventions here. I hope this helps someone and saves you a little time. This is probably not an optimal solution (I'm no guru), and I'd love to hear about some improvements. Install some things (the Python packages will help with the scripts): sudo apt-get install bridge-utils bluez python-dbus python-gobject Download two very cool Python scripts, put them in /usr/local/bin and chmod both perhaps to 755 depending on who needs access to execute... blueagent5 and bt-pan. Many thanks and homage to their respective authors. Gosh this kind of thing saves so much time and misery. Now, we need a bridge. Add the following to the end of /etc/network/interfaces auto pan0 iface pan0 inet dhcp bridge_stp off bridge_ports eth0 I rebooted at about this time to make sure all was as it would be normally. sudo reboot Log back in and issue modprobe bnep hciconfig hci0 lm master,accept ip link set pan0 up If you don't want a PIN prompt, don't do this next step. To ensure we get a PIN prompt, issue this... hciconfig hci0 sspmode 0 Start PAN using the special magic in the bt-pan script. It doesn't return, so add an ampersand at the end. bt-pan server pan0 & Start the Bluetooth security agent with wonderful ease and confidence. Optionally set a PIN (it defaults to 0000). blueagent5 --pin 4321 & Okay, one last thing. Forward the network. This will only work if there is no fancy authentication at the router/DHCP; if there is, you may need to look further to solve this issue. sysctl -w net.ipv4.ip_forward=1 iptables -A INPUT -i pan0 -j ACCEPT iptables -A FORWARD -i pan0 -j ACCEPT Once done, you may need to save these iptables settings and reinstate them each time the system boots. Tiptoe over to your tablet or whatever you are trying to connect to the internet. Open Bluetooth in your settings. Pair with 4321 as your PIN, and connect to the local network. But you didn't need to tiptoe after all, it all seems quite robust to me. Enjoy!
I recommend a 2 step approach to this. Would love to hear feedback if this is overcomplicating it. 1) Help your users pick the right account passport.authenticate('google', { // Only show accounts that match the hosted domain. hd: 'example.com', // Ensure the user can always select an account when sent to Google. prompt: 'select_account', scope: [ 'https://www.googleapis.com/auth/plus.login', 'https://www.googleapis.com/auth/plus.profile.emails.read' ] })(req, res, next); 2) Validate their profile When a user is sent to accounts.google.com to authenticate, there is a simple hd=example.com query parameter in the URL. You can remove this and authenticate with any account (Passport will successfully verify the Oauth code regardless of the domain of the chosen account), so it should only be considered sugar for the end user and not security for the server. When Passport does resolve the authentication, just check the hosted domain as in aembke's answer: passport.use(new google_strategy({ clientID: ... clientSecret: ... callbackURL: ... }, function(token, tokenSecret, profile, done) { if (profile._json.domain !== 'example.com') { done(new Error("Wrong domain!")); } else { done(null, profile); } }));
you need to have a way to surface your auth to the frontend. lets say you have an api called user/validate the purpose of that api is to return an authenticated flag and whatever else you want like the server auth token or something. you need a method to request that information. I'm assuming you have a way to make requests to api methods already setup. make a function to request this authentication. export const checkAuth = () => { const url = `${api_route}/user/validate`; // this is just pseudo code to give you an idea of how to do it someRequestMethod(url, (resp) => { if (resp.status === 200 && resp.data.isAuthenticated === true) { setCookie(STORAGE_KEY, resp.data.token); } }); } your base app component would look something like this export default class App extends Component { constructor() { super(); checkAuth(); } .... } now your component could do something like this. class MyComponent extends Component { constructor(){ super() this.isAuthenticated = getCookie(STORAGE_KEY); } render() { return( <div> Hello {this.isAuthenticated ? 'friend' : 'stranger'} ! </div> ); } } your getCookie and setCookie methods would be something like this export const setCookie = (name, value, days, path = '/') => { let expires = ''; if (days) { let date = new Date(); date.setTime(date.getTime() + (days * 24 * 60 * 60 * 1000)); expires = `; expires=${date.toUTCString()};`; } document.cookie = `${name}=${value}${expires}; path=${path}`; }; export const getCookie = (cookieName) => { if (document.cookie.length > 0) { let cookieStart = document.cookie.indexOf(cookieName + '='); if (cookieStart !== -1) { cookieStart = cookieStart + cookieName.length + 1; let cookieEnd = document.cookie.indexOf(';', cookieStart); if (cookieEnd === -1) { cookieEnd = document.cookie.length; } return window.unescape(document.cookie.substring(cookieStart, cookieEnd)); } } return ''; }; Now... I would strongly recommend you look at adding something like Redux to handle passing data around via props. This way you can have one storage method that does the getCookie and sets it up right away and everything else will have isAuthenticated as a flag in the props
I modified my code like below and it worked. I referred Swift: How to Make Https Request Using Server SSL Certificate for fixing this issue. class LoginService{ private static var Manager: Alamofire.SessionManager = { // Create the server trust policies let serverTrustPolicies: [String: ServerTrustPolicy] = [ "devportal:8443": .disableEvaluation ] // Create custom manager let configuration = URLSessionConfiguration.default configuration.httpAdditionalHeaders = Alamofire.SessionManager.defaultHTTPHeaders let manager = Alamofire.SessionManager( configuration: URLSessionConfiguration.default, serverTrustPolicyManager: ServerTrustPolicyManager(policies: serverTrustPolicies) ) return manager }() /** Calls the Login Web Service to authenticate the user */ public func login(username:String, password: String){ // Handle Authentication challenge let delegate: Alamofire.SessionDelegate = LoginService.Manager.delegate delegate.sessionDidReceiveChallenge = { session, challenge in var disposition: URLSession.AuthChallengeDisposition = .performDefaultHandling var credential: URLCredential? if challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust { disposition = URLSession.AuthChallengeDisposition.useCredential credential = URLCredential(trust: challenge.protectionSpace.serverTrust!) } else { if challenge.previousFailureCount > 0 { disposition = .cancelAuthenticationChallenge } else { credential = LoginService.Manager.session.configuration.urlCredentialStorage?.defaultCredential(for: challenge.protectionSpace) if credential != nil { disposition = .useCredential } } } return (disposition, credential) } //Web service Request let parameters = [ "username": "TEST", "password": "PASSWORD", ] let header: HTTPHeaders = ["Accept": "application/json"] LoginService.Manager.request("https://devportal:8443/rest/login", method: .post, parameters: parameters, encoding: JSONEncoding(options: []),headers :header).responseJSON { response in debugPrint(response) if let json = response.result.value { print("JSON: \(json)") } } } } You should also configure your plist as below <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"> <plist version="1.0"> <dict> <key>NSExceptionDomains</key> <dict> <key>devportal</key> <dict> <key>NSTemporaryExceptionMinimumTLSVersion</key> <string>TLSv1.2</string> <key>NSIncludesSubdomains</key> <true/> <key>NSExceptionRequiresForwardSecrecy</key> <false/> <key>NSExceptionAllowsInsecureHTTPLoads</key> <true/> </dict> </dict> <key>NSAllowsArbitraryLoads</key> <false/> </dict> </plist> Do not enter IP or port numbers in your NSExceptiondomains. It won't work. If you are trying to connect to a web server with IP address, map the IP address to a domain by adding a host entry in etc/hosts file in your mac and then use the domain name in NSExceptionDomains IMPORTANT: Do not use this code in production as this puts your users information at risk, by bypassing auth challenge.
First of all, calling a method of one controller from another controller is EVIL. This will cause many hidden problems in Laravel's life-cycle. Anyway, there are many solutions for doing that. You can select one of the following ways. Case 1) If you want to call based on Classes Way 1) The simple way. You can't add any parameters or authentication this way, though. app(\App\Http\Controllers\PrintReportController::class)->getPrintReport(); Way 2) Divide the controller logic into services. You can add any parameters and whatever else you need with this. The best solution for your programming life. You can make a Repository instead of a Service. class PrintReportService { ... public function getPrintReport() { return ... } } class PrintReportController extends Controller { ... public function getPrintReport() { return (new PrintReportService)->getPrintReport(); } } class SubmitPerformanceController { ... public function getSomethingProxy() { ... $a = (new PrintReportService)->getPrintReport(); ... return ... } } Case 2) If you want to call based on Routes Way 1) Use the MakesHttpRequests trait that is used in application unit testing. I recommend this if you have a special reason for making this proxy; you can use any parameters and custom headers. Also, this will be an internal request in Laravel (a fake HTTP request). You can see more details for the call method here. class SubmitPerformanceController extends \App\Http\Controllers\Controller { use \Illuminate\Foundation\Testing\Concerns\MakesHttpRequests; protected $baseUrl = null; protected $app = null; function __construct() { // Required if you want to use MakesHttpRequests $this->baseUrl = request()->getSchemeAndHttpHost(); $this->app = app(); } public function getSomethingProxy() { ... $a = $this->call('GET', '/printer/report')->getContent(); ... return ... } } However, this is not a 'good' solution either. Way 2) Use the guzzlehttp client This is the most terrible solution, I think. You can use any parameters and custom headers, too, but it makes an extra external HTTP request, so an HTTP web server must be running. $client = new Client([ 'base_uri' => request()->getSchemeAndHttpHost(), 'headers' => request()->header() ]); $a = $client->get('/performance/submit')->getBody()->getContents();
If you have checked "Lambda Proxy Integration" in your Method Integration Request on API Gateway, you should receive the stage from API Gateway, as well as any stageVariable you have configured. Here's an example of an event object from a Lambda function invoked by API Gateway configured with "Lambda Proxy Integration": { "resource": "/resourceName", "path": "/resourceName", "httpMethod": "POST", "headers": { "header1": "value1", "header2": "value2" }, "queryStringParameters": null, "pathParameters": null, "stageVariables": null, "requestContext": { "accountId": "123", "resourceId": "abc", "stage": "dev", "requestId": "456", "identity": { "cognitoIdentityPoolId": null, "accountId": null, "cognitoIdentityId": null, "caller": null, "apiKey": null, "sourceIp": "1.1.1.1", "accessKey": null, "cognitoAuthenticationType": null, "cognitoAuthenticationProvider": null, "userArn": null, "userAgent": "agent", "user": null }, "resourcePath": "/resourceName", "httpMethod": "POST", "apiId": "abc123" }, "body": "body here", "isBase64Encoded": false }
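To make that concrete, here is a minimal Python handler sketch (the function name and the response shape around it are just illustrative, not taken from the answer above) showing where the stage and stage variables live in the proxy-integration event:

# Minimal sketch of a Lambda handler behind API Gateway with
# "Lambda Proxy Integration" enabled.
import json

def lambda_handler(event, context):
    # The deployment stage comes in on the request context...
    stage = event.get("requestContext", {}).get("stage", "unknown")
    # ...and any configured stage variables arrive alongside it.
    stage_variables = event.get("stageVariables") or {}

    # Proxy integration expects statusCode/headers/body in the return value.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "stage": stage,
            "stageVariables": stage_variables,
        }),
    }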
The explanation of the difference between OpenID, OAuth, OpenID Connect: OpenID is a protocol for authentication while OAuth is for authorization. Authentication is about making sure that the guy you are talking to is indeed who he claims to be. Authorization is about deciding what that guy should be allowed to do. In OpenID, authentication is delegated: server A wants to authenticate user U, but U's credentials (e.g. U's name and password) are sent to another server, B, that A trusts (at least, trusts for authenticating users). Indeed, server B makes sure that U is indeed U, and then tells to A: "ok, that's the genuine U". In OAuth, authorization is delegated: entity A obtains from entity B an "access right" which A can show to server S to be granted access; B can thus deliver temporary, specific access keys to A without giving them too much power. You can imagine an OAuth server as the key master in a big hotel; he gives to employees keys which open the doors of the rooms that they are supposed to enter, but each key is limited (it does not give access to all rooms); furthermore, the keys self-destruct after a few hours. To some extent, authorization can be abused into some pseudo-authentication, on the basis that if entity A obtains from B an access key through OAuth, and shows it to server S, then server S may infer that B authenticated A before granting the access key. So some people use OAuth where they should be using OpenID. This schema may or may not be enlightening; but I think this pseudo-authentication is more confusing than anything. OpenID Connect does just that: it abuses OAuth into an authentication protocol. In the hotel analogy: if I encounter a purported employee and that person shows me that he has a key which opens my room, then I suppose that this is a true employee, on the basis that the key master would not have given him a key which opens my room if he was not. (source) How is OpenID Connect different than OpenID 2.0? OpenID Connect performs many of the same tasks as OpenID 2.0, but does so in a way that is API-friendly, and usable by native and mobile applications. OpenID Connect defines optional mechanisms for robust signing and encryption. Whereas integration of OAuth 1.0a and OpenID 2.0 required an extension, in OpenID Connect, OAuth 2.0 capabilities are integrated with the protocol itself. (source) OpenID connect will give you an access token plus an id token. The id token is a JWT and contains information about the authenticated user. It is signed by the identity provider and can be read and verified without accessing the identity provider. In addition, OpenID connect standardizes quite a couple things that oauth2 leaves up to choice. for instance scopes, endpoint discovery, and dynamic registration of clients. This makes it easier to write code that lets the user choose between multiple identity providers. (source) Google's OAuth 2.0 Google's OAuth 2.0 APIs can be used for both authentication and authorization. This document describes our OAuth 2.0 implementation for authentication, which conforms to the OpenID Connect specification, and is OpenID Certified. The documentation found in Using OAuth 2.0 to Access Google APIs also applies to this service. If you want to explore this protocol interactively, we recommend the Google OAuth 2.0 Playground. (source)
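Since the OpenID Connect id token is just a signed JWT, you can see what it carries by splitting it on the dots and Base64-decoding the middle segment. The sketch below is plain Python and does not verify the signature, so it is only for inspection, never for making trust decisions; it simply illustrates that the claims are readable without calling the identity provider:

# Inspect the claims inside an OpenID Connect id token (a JWT).
# This only decodes -- it does NOT verify the signature.
import base64
import json

def jwt_claims(id_token):
    header_b64, payload_b64, signature_b64 = id_token.split(".")
    # JWT segments are base64url without padding; add the padding back.
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Example (the token value here is made up / truncated):
# claims = jwt_claims("eyJhbGciOi...")
# print(claims["iss"], claims["sub"], claims["aud"], claims["exp"])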
The problem with authentication is that it may be simple to write a few lines of code that accomplish the end goal of authenticating your users, but it's really complex to do so in a way that you won't regret it later; aka my application got owned. One of the steps to take to try to prevent that from happening is don't try to reinvent the wheel and stick to using current standards. However, even with standards, you need to implement them accordingly and that is why you probably see recommendations like the one you mentioned. I would actually make the same type of recommendation my self, delegate as much as you can to third-party libraries like IdentityServer4 or cloud services like Auth0. (you should know I work at Auth0, so you can consider me biased; however, for a non-biased recommendation you can check ThoughtWorks Technology Radar). Also, if you store tokens in cookies, although the storage and transmission of the token happen differently this is still a token based-authentication system; for more on the possible implications of choosing a client-side token storage approach check where to save a JWT in a browser-based application. In relation to CORS, you did not make that explicit, so I thought it may be worthwhile to mention it. You only need to actually worry about CORS if you deploy your front-end and back-end in separate domains because even if development happens in isolation if they share a domain CORS is one less thing you need to worry about. Conclusion For a front-end browser-based application talking with a REST API, the most common approach is to go with token-based authentication that relies on OAuth 2.0 protocol to perform the actual issuance of the tokens. Additionally, you should aim to delegate token issuance to a third-party (IdentityServer4 or other) so that you don't need to implement or maintain that part of the system and only need to consume/validate the generated tokens.
I think it is important to revisit the different steps of authentication, and hopefully through the discussion you will be able to solve the issue you are having. When a client is trying to get an access token to a resource, it needs to specify to AAD which resource it wants to get a token for. A client may be configured to call multiple resources, all with different configurations, so it is an expectation that the resource is always specified in an Access Token Request. The resource can either be an App ID GUID for the Resource, or a valid App ID URI which is registered on the Resource. AAD should be able to uniquely identify which resource you are trying to reach based on the value you provide. However, note that if you use an App ID GUID, you will get a token from AAD where the Audience claim is the App ID GUID. Alternatively, if you use an App ID URI, you will see that URI as the audience claim in the token. In both situations, you will get a token for the 'same' resource, but the claim in the token will appear differently. Additionally, it may be possible that a single application resource may have multiple App ID URIs registered on their app. Depending on which one you use in the authentication request, you will get a different audience claim in the token which matches the resource parameter you passed in. Finally, once you get the token, you send it over to the Resource API who will validate the token for a number of things, such as: the Client ID Claim, the Scopes/Roles Claims, the authentication method ('acr' claim), and definitely that the audience claim matches what they expect! This means that the Resource API ultimately needs to say "I accept < App ID GUID > as a valid Audience Claim"... or "I accept < App ID URI > as a valid Audience Claim". This kind of logic may be built into the library you are using (like OWIN), but you need to make sure that on your API side, you have it configured correctly for the Audiences you expect. You could, if you wanted, make it so that your API does not check the Audience claim at all! All the claims in the token are plaintext, and thus you could really do whatever you want, but you would not have a very secure API in that situation :] End of the day, my hunch is that this error is coming from your own API, and it is happening because you have not configured your app to accept an Audience claim which matches your Resource's App ID GUID (which it looks like what you are passing when you are getting a token based on your code sample). I hope this solves your issue!
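As a plain illustration of that last point (this is not the OWIN configuration itself, just a language-neutral sketch in Python with placeholder identifiers), the resource API's audience check boils down to comparing the token's aud claim against every identifier the API is willing to answer for, whether that is the App ID GUID, the App ID URI, or both:

# Sketch of the audience check a resource API performs on an incoming token.
# The accepted values below are placeholders for your API's App ID GUID / URI.
ACCEPTED_AUDIENCES = {
    "00000000-0000-0000-0000-000000000000",    # App ID GUID (placeholder)
    "https://contoso.onmicrosoft.com/my-api",  # App ID URI (placeholder)
}

def audience_is_valid(claims):
    # 'aud' may be a single string or a list of strings in a JWT.
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    return any(a in ACCEPTED_AUDIENCES for a in audiences)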
The BCrypt family of function are classified as Cryptographic Primitives, while the NCrypt family of functions are classified as Key Storage and Retrieval. The primary difference is that the BCrypt functions are used when dealing only with ephemeral keys, while the NCrypt functions are used when persistent keys are required. In practice, the BCrypt functions are typically used for hashing and symmetric encryption, while the NCrypt functions are used for public/private key encryption and decryption, public/private key signing and verification, and shared secret (e.g. DH and ECDH) negotiation. While some public/private key operations can be done with BCrypt functions, they can only be used with ephemeral keys and are therefore of limited use. Persistent keys are stored in key containers specific to each user (or to the system). This is a security measure to ensure that users can't view each other's private keys. In general, you'll want to use the following functions for the following operations: BCryptHashData: Used for hashing and HMAC (MD5, SHA1, SHA256, SHA384, SHA512) Related: BCryptCreateHash, BCryptFinishHash, BCryptDestroyHash BCryptEncrypt: Symmetric key encryption (DES, 3DES, AES). Related: BCryptGenerateSymmetricKey, BCryptDestroyKey BCryptDecrypt: Symmetric key decryption (DES, 3DES, AES). Related: BCryptGenerateSymmetricKey, BCryptDestroyKey NCryptEncrypt: Asymmetric key encryption (RSA) NCryptDecrypt: Asymmetric key decryption (RSA) NCryptSignHash: Asymetric key signature (RSA, DSA, ECDSA) NCryptVerifySignature: Asymmetric key signature verification (RSA, DSA, ECDSA) NCryptSecretAgreement: Asymmetric key secret sharing (DH, ECDH) Related: NCryptDeriveKey Examples are available at MSDN for several of these cases. For a real world example, I've implemented all of these in the UFTP source code, specifically the encrypt_cng.c file (there are typedefs in place, defined in encryption.h, to allow the functions in this file to implement a common application level API to allow the use of other crypto libraries such as CryptoAPI and OpenSSL).
Here is another example, with custom claim types as well: Login: var claims = new List<Claim> { new Claim(ClaimTypes.Name, user.Name, ClaimValueTypes.String), new Claim(ClaimTypes.Email, user.Email ?? string.Empty, ClaimValueTypes.Email), new Claim(ClaimTypes.PrimarySid, user.Id.ToString(), ClaimValueTypes.Integer), new Claim(CustomClaimTypes.SalesId, user.SalesId.ToString(), ClaimValueTypes.Integer) }; var claimsIdentity = new ClaimsIdentity(claims, DefaultAuthenticationTypes.ApplicationCookie); AuthenticationManager.SignIn(claimsIdentity); Custom claims: public static class CustomClaimTypes { public const string SalesId = "SalesId"; } Extension methods: public static class IdentityExtensions { public static int GetSalesId(this IIdentity identity) { ClaimsIdentity claimsIdentity = identity as ClaimsIdentity; Claim claim = claimsIdentity?.FindFirst(CustomClaimTypes.SalesId); if (claim == null) return 0; return int.Parse(claim.Value); } public static string GetName(this IIdentity identity) { ClaimsIdentity claimsIdentity = identity as ClaimsIdentity; Claim claim = claimsIdentity?.FindFirst(ClaimTypes.Name); return claim?.Value ?? string.Empty; } } Can then be accessed like this: User.Identity.GetSalesId(); User.Identity.GetName();
/** * Example of retrieving the products list using Admin account via Magento REST API. OAuth authorization is used * Preconditions: * 1. Install php oauth extension * 2. If you were authorized as a Customer before this step, clear browser cookies for 'yourhost' * 3. Create at least one product in Magento * 4. Configure resource permissions for Admin REST user for retrieving all product data for Admin * 5. Create a Consumer */ // $callbackUrl is a path to your file with OAuth authentication example for the Admin user $baseUrl = 'http://yourhost.abc'; $scriptName = $_SERVER['SCRIPT_NAME']; $callbackUrl = 'http://scripthost.xyz' . $scriptName; $temporaryCredentialsRequestUrl = $baseUrl."/oauth/initiate?oauth_callback=" . urlencode($callbackUrl); $adminAuthorizationUrl = $baseUrl.'/admin/oauth_authorize'; $customerAuthorizationUrl = $baseUrl.'/oauth/authorize'; $accessTokenRequestUrl = $baseUrl.'/oauth/token'; $apiUrl = $baseUrl.'/api/rest'; $consumerKey = 'Your API consumer key'; $consumerSecret = 'Your API consumer secret'; session_start(); if (!isset($_GET['oauth_token']) && isset($_SESSION['state']) && $_SESSION['state'] == 1) { $_SESSION['state'] = 0; } try { $authType = ($_SESSION['state'] == 2) ? OAUTH_AUTH_TYPE_AUTHORIZATION : OAUTH_AUTH_TYPE_URI; $oauthClient = new OAuth($consumerKey, $consumerSecret, OAUTH_SIG_METHOD_HMACSHA1, $authType); $oauthClient->enableDebug(); if (!isset($_GET['oauth_token']) && !$_SESSION['state']) { $requestToken = $oauthClient->getRequestToken($temporaryCredentialsRequestUrl); $_SESSION['secret'] = $requestToken['oauth_token_secret']; $_SESSION['state'] = 1; header('Location: ' . $customerAuthorizationUrl . '?oauth_token=' . $requestToken['oauth_token']); exit; } else if ($_SESSION['state'] == 1) { $oauthClient->setToken($_GET['oauth_token'], $_SESSION['secret']); $accessToken = $oauthClient->getAccessToken($accessTokenRequestUrl); $_SESSION['state'] = 2; $_SESSION['token'] = $accessToken['oauth_token']; $_SESSION['secret'] = $accessToken['oauth_token_secret']; header('Location: ' . $callbackUrl); exit; } else { $oauthClient->setToken($_SESSION['token'], $_SESSION['secret']); $resourceUrl = "$apiUrl/products"; $oauthClient->fetch($resourceUrl, array(), 'GET', array('Content-Type' => 'application/json')); $productsList = json_decode(json_encode($oauthClient->getLastResponse()), FALSE); echo $productsList; } } catch (OAuthException $e) { print_r($e->getMessage()); echo "<br/>"; print_r($e->lastResponse); } ?> /** Also, the callback URL is the same as your PHP calling script; in other words, you have to redirect to this script. Also, your callback script and your server should both be on a live server or both on a local server. */
Here is the sample code i have started from documentation but it is returning no changes in the rates. I want to include DRY ICE charges ini_set("soap.wsdl_cache_enabled", "0"); $client = new SoapClient($path_to_wsdl, array('trace' => 1)); // Refer to http://us3.php.net/manual/en/ref.soap.php for more information $request['WebAuthenticationDetail'] = array( 'ParentCredential' => array( 'Key' => getProperty('parentkey'), 'Password' => getProperty('parentpassword') ), 'UserCredential' => array( 'Key' => getProperty('key'), 'Password' => getProperty('password') ) ); $request['ClientDetail'] = array( 'AccountNumber' => getProperty('shipaccount'), 'MeterNumber' => getProperty('meter') ); $request['TransactionDetail'] = array('CustomerTransactionId' => ' *** Rate Request using PHP ***'); $request['Version'] = array( 'ServiceId' => 'crs', 'Major' => '20', 'Intermediate' => '0', 'Minor' => '0' ); $request['ReturnTransitAndCommit'] = true; $request['RequestedShipment']['DropoffType'] = 'REGULAR_PICKUP'; // valid values REGULAR_PICKUP, REQUEST_COURIER, ... $request['RequestedShipment']['ShipTimestamp'] = date('c'); $request['RequestedShipment']['SpecialServicesRequested']['ShipmentSpecialServiceType']= "DRY_ICE"; $request['RequestedShipment']['specialServicesRequested']['shipmentDryIceDetail']['packageCount']= 5; $request['RequestedShipment']['specialServicesRequested']['shipmentDryIceDetail']['totalweight']= 50; $request['RequestedShipment']['specialServicesRequested']['ShipmentSpecialServiceType'] = 'DRY_ICE'; $request['RequestedShipment']['ServiceType'] = 'INTERNATIONAL_PRIORITY'; // valid values STANDARD_OVERNIGHT, PRIORITY_OVERNIGHT, FEDEX_GROUND, ... $request['RequestedShipment']['PackagingType'] = 'YOUR_PACKAGING'; // valid values FEDEX_BOX, FEDEX_PAK, FEDEX_TUBE, YOUR_PACKAGING, ... $request['RequestedShipment']['TotalInsuredValue']=array( 'Ammount'=>100, 'Currency'=>'USD' ); $request['RequestedShipment']['Shipper'] = addShipper(); $request['RequestedShipment']['Recipient'] = addRecipient(); $request['RequestedShipment']['ShippingChargesPayment'] = addShippingChargesPayment(); $request['RequestedShipment']['PackageCount'] = '1'; $request['RequestedShipment']['RequestedPackageLineItems'] = addPackageLineItem1();
There is no need to log in remotely to run an SQL query. You can use the function below and pass the variables as required. Whichever account has access can be passed as the credentials (works for both Windows and SQL authentication).

$SQLInstance = "Instance Name"
$Database = "Database"
$ID = "User ID"
$Password = "Password"

function Invoke-Sqlcommand
{
    [CmdletBinding()]
    param(
        [Parameter(Position=0, Mandatory=$true)] [string]$ServerInstance,
        [Parameter(Position=1, Mandatory=$false)] [string]$Database,
        [Parameter(Position=2, Mandatory=$false)] [string]$Query,
        [Parameter(Position=3, Mandatory=$false)] [string]$Username,
        [Parameter(Position=4, Mandatory=$false)] [string]$Password,
        [Parameter(Position=5, Mandatory=$false)] [Int32]$QueryTimeout=600,
        [Parameter(Position=6, Mandatory=$false)] [Int32]$ConnectionTimeout=15,
        [Parameter(Position=7, Mandatory=$false)] [ValidateScript({test-path $_})] [string]$InputFile,
        [Parameter(Position=8, Mandatory=$false)] [ValidateSet("DataSet", "DataTable", "DataRow")] [string]$As="DataRow"
    )

    if ($InputFile)
    {
        $filePath = $(resolve-path $InputFile).path
        $Query = [System.IO.File]::ReadAllText("$filePath")
    }

    $conn = new-object System.Data.SqlClient.SQLConnection

    if ($Username)
    {
        $ConnectionString = "Server={0};Database={1};User ID={2};Password={3};Trusted_Connection=False;Connect Timeout={4}" -f $ServerInstance,$Database,$Username,$Password,$ConnectionTimeout
    }
    else
    {
        $ConnectionString = "Server={0};Database={1};Integrated Security=True;Connect Timeout={2}" -f $ServerInstance,$Database,$ConnectionTimeout
    }

    $conn.ConnectionString = $ConnectionString

    # The following event handler is used for PRINT and RAISERROR T-SQL statements.
    # It is executed when the -Verbose parameter is specified by the caller.
    if ($PSBoundParameters.Verbose)
    {
        $conn.FireInfoMessageEventOnUserErrors = $true
        $handler = [System.Data.SqlClient.SqlInfoMessageEventHandler] {Write-Verbose "$($_)"}
        $conn.add_InfoMessage($handler)
    }

    $conn.Open()
    $cmd = new-object system.Data.SqlClient.SqlCommand($Query,$conn)
    $cmd.CommandTimeout = $QueryTimeout
    $ds = New-Object system.Data.DataSet
    $da = New-Object system.Data.SqlClient.SqlDataAdapter($cmd)
    [void]$da.fill($ds)
    $conn.Close()

    switch ($As)
    {
        'DataSet'   { Write-Output ($ds) }
        'DataTable' { Write-Output ($ds.Tables) }
        'DataRow'   { Write-Output ($ds.Tables[0]) }
    }
}

Invoke-Sqlcommand -ServerInstance $SQLInstance -Database $Database -Query "Query Goes here" -Username $ID -Password $Password

Hope it helps.
I think you are looking for this. The first <system.web> node is the default node in every Web.config; there you deny anonymous users access to the site with <deny users="?"/>. On its own this would also block the register page and send the browser into a redirect loop, so you add a <location> node to the Web.config that grants anonymous users access to the registration page with <allow users="?"/>.

<?xml version="1.0"?>
<configuration>
  <system.web>
    <globalization uiCulture="nl" culture="nl-NL" />
    <compilation targetFramework="4.5.1" />
    <httpRuntime targetFramework="4.5" executionTimeout="120" maxRequestLength="1024000" />
    <authentication mode="Forms">
      <forms cookieless="UseCookies" timeout="43200" loginUrl="/Register.aspx" defaultUrl="/Default.aspx" />
    </authentication>
    <authorization>
      <deny users="?"/>
    </authorization>
  </system.web>
  <location path="Register.aspx">
    <system.web>
      <authorization>
        <allow users="?"/>
      </authorization>
    </system.web>
  </location>
</configuration>
Using sessions you can do it like this. In your login.php, before any HTML output, put:

<?php session_start(); ?>

This starts the session. When the user submits the form, in nextpage.php first put session_start(); at the top of the script again. After authenticating the user you can set the session like this:

if (mysqli_num_rows($result) == 0) {
    // take the user back to login.php if the user doesn't exist
    header("Location: login.php?error=User does not exist");
} else {
    // the user exists
    // get parameters for studentNo, userName, password
    $_SESSION['user_id'] = '<userId>';
    // ..
    header("Location: <secure_page>");
}

In the secure page where you've redirected the user, again put session_start(); at the top of the script, and after that check the session:

if (!isset($_SESSION['user_id']) || $_SESSION['user_id'] == "") {
    header("Location: login.php?error=access denied");
}

This makes sure that unauthenticated users can't access the page.

About the issue of the alert not showing: in the script where you're using JavaScript, add the type attribute:

echo '<script type="text/javascript" language="javascript">';

Another problem with the code is that the JavaScript only executes once the page has loaded, but after echoing it you call header("Location: login.php");, which redirects the user without ever showing the JavaScript alert.
If authentication succeeds, you should save some info about the user in the session, and then you can check whether a user with that id already exists. In your AuthController.js, or whatever controller you are using:

usr = req.user.auth_data;

// check if a user already exists with that id
if (req.user.user_data.id) {
    UserInfo.findOne({ user_id: req.user.user_data.id }).exec(function (err, userDetails) {
        if (err) {
            return res.send({ error: err });
        } else if (!req.user.user_data.email) {
            return res.view('form/email');
        } else if (!userDetails) {
            // we have the phone number already available
            return res.redirect('/profile');
        } else {
            return res.redirect('/dashboard');
        }
    });
} else {
    // it's a new user: show a form for entering the minimal necessary details
    return res.view('form/emailphone', {
        userDetails: req.user.user_data
    });
}

Also create a policy that guards the pages allowed only after logging in, so that if a user's session expires in between, the user is redirected to the home page (see the sketch below).
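A minimal sketch of such a Sails policy, assuming Passport has populated req.user for authenticated sessions (the file name and the exact check are illustrative, not part of the original answer):

// api/policies/isAuthenticated.js (hypothetical name)
module.exports = function (req, res, next) {
  if (req.user) {
    return next();          // session is still valid, let the request through
  }
  return res.redirect('/'); // session expired or never logged in: back to the home page
};

The policy is then mapped onto the protected controllers/actions in config/policies.js.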
The session object is related to the presentation (view) layer, so it is NOT a best practice to pass the session object to the service layer; handle it in your controller instead. If you handle session or view related objects in the service layer, you end up tightly coupling your service layer (business logic) to the presentation layer, which is a problem because the whole objective of the service layer is to stay loosely coupled from its different endpoints (controllers, web services, etc.).

So you can add the user details to the HttpSession in the controller layer itself, like below:

@Controller
public class UserDetailsController {

    public R method1(HttpSession session, UserDetailsBean bean) {
        session.setAttribute("USERDETAILS", bean);
        // you can use the UserDetailsBean object anywhere until you remove it from the session
    }
}

If you are using spring-security, you can easily get the user details in your controller like below:

UserDetails userDetails = (UserDetails) SecurityContextHolder.getContext().getAuthentication().getPrincipal();

You can look up the API here
I'm assuming you're using the Mobile SDK for Android and have everything set up.

First, connect to the user pool:

CognitoUserPool userPool = new CognitoUserPool(context, userPoolId, clientId, clientSecret);

Then pick the user you want to authenticate:

CognitoUser user = userPool.getUser(userId);

Then write the authentication handler. Cognito will call into your code when (if) it needs a username and a password, rather than you calling it.

AuthenticationHandler handler = new AuthenticationHandler() {

    @Override
    public void onSuccess(CognitoUserSession userSession) {
        // Authentication was successful; "userSession" holds the current valid tokens
    }

    @Override
    public void getAuthenticationDetails(final AuthenticationContinuation continuation, final String userID) {
        // User authentication details (userId and password) are required to continue.
        // Use the "continuation" object to pass the user authentication details.
        // Wrap them in an AuthenticationDetails object; along with userId and password,
        // parameters for user-pool Lambda triggers can be passed here.
        // The validation parameters "validationParameters" are passed in as a Map<String, String>.
        AuthenticationDetails authDetails = new AuthenticationDetails(userId, password, validationParameters);

        // Now allow the authentication to continue
        continuation.setAuthenticationDetails(authDetails);
        continuation.continueTask();
    }

    /* Handle 2FA, challenges, onFailure, etc. as needed */
};

Finally, try to get a new session, passing in your handler:

user.getSession(handler);

If all goes well, you should now have a session with valid tokens. This example is based on the developer guide, which also has examples for registering new users, signing out, and so on.
This is my simple explanation of how these two work together.

Always keep in mind that Angular works only on the front end. Its domain is the look and feel: application events, sending data to the server, and anything else that has to do with displaying data is coded in this area.

Back-end services, on the other hand, interact with your database: business logic, authentication handling, saving and sending data, and anything else that touches the database is coded here.

The two interact by the front end sending HTTP requests to the server, which hosts the back-end service. This is done with Angular's $http service, jQuery AJAX, or the plain XMLHttpRequest native to JavaScript. Newer technologies use WebSockets (used by Firebase and some other frameworks), which offer a faster way of sending and fetching data from the server. The server then interprets the data being sent and returns an appropriate response: getting the user list, saving a profile, getting reports, logging in, etc.

It works in this workflow:

1) Angular sends an HTTP request to the server to get the list of users.
2) The back-end service installed on the server interprets the data being sent.
3) The back-end service gets the list of users from the database.
4) The back end sends the data back to the front-end service.
5) The front end receives the server response and displays the data in the view (see the sketch below).

Also, these two are coded separately. For a more detailed explanation, research how front-end and back-end services interact; you can find plenty of resources on Google.
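As a minimal sketch of steps 1 and 5 above (the /api/users endpoint and the module/controller names are made up purely for illustration):

// AngularJS controller: ask the back end for the user list and bind it to the view
angular.module('app').controller('UserListCtrl', function ($scope, $http) {
  $http.get('/api/users')                 // step 1: HTTP request to the server
    .then(function (response) {
      $scope.users = response.data;       // step 5: display the server's response in the view
    }, function (error) {
      console.error('Request failed', error);
    });
});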
import java.io.File;
import java.io.IOException;
import java.util.Properties;

import javax.activation.DataHandler;
import javax.activation.DataSource;
import javax.activation.FileDataSource;
import javax.mail.BodyPart;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.Multipart;
import javax.mail.PasswordAuthentication;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.AddressException;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMultipart;

import org.testng.annotations.Test;

public class SendAttachment {

    @Test
    public static void sendmail() throws AddressException, MessagingException, InterruptedException {
        Thread.sleep(50000);
        System.out.println("Test mail");
        String[] to = {"mail address", "mail address"};
        String to2 = "mail address";            // change accordingly
        final String user = "mail address";     // change accordingly
        final String password = "password";     // change accordingly

        // 1) get the session object
        Properties properties = System.getProperties();
        properties.setProperty("mail.smtp.host", "smtp.gmail.com");
        properties.put("mail.smtp.port", "587");             // TLS port
        properties.put("mail.smtp.auth", "true");            // enable authentication
        properties.put("mail.smtp.starttls.enable", "true");

        Session session = Session.getDefaultInstance(properties, new javax.mail.Authenticator() {
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication(user, password);
            }
        });

        // 2) compose the message
        MimeMessage message = new MimeMessage(session);
        message.setFrom(new InternetAddress(user));
        message.setRecipients(Message.RecipientType.TO, InternetAddress.parse("mailid1,mailid2"));
        message.setSubject("ECM Regression Test suite Results");

        // 3) create a MimeBodyPart object and set your message text
        BodyPart messageBodyPart1 = new MimeBodyPart();
        messageBodyPart1.setText("Please find the Regression Result in the attachment");

        // 4) create a new MimeBodyPart object and set a DataHandler object on it
        MimeBodyPart messageBodyPart2 = new MimeBodyPart();
        MimeBodyPart messageBodyPart3 = new MimeBodyPart();
        MimeBodyPart messageBodyPart4 = new MimeBodyPart();
        MimeBodyPart messageBodyPart5 = new MimeBodyPart();

        File f3 = new File("D:\\svn\\CI_1.0\\seleniumScriptsRegression\\seleniumScriptsRegression\\test-output\\emailable-report.html");
        DataSource source4 = new FileDataSource(f3);
        messageBodyPart5.setDataHandler(new DataHandler(source4));
        messageBodyPart5.setFileName(f3.getName());

        // 5) create a Multipart object and add the MimeBodyPart objects to it
        Multipart multipart = new MimeMultipart();
        multipart.addBodyPart(messageBodyPart1);
        multipart.addBodyPart(messageBodyPart5);

        // 6) set the multipart object on the message object
        message.setContent(multipart);

        // 7) send the message
        Transport.send(message);
        System.out.println("message sent....");
    }
}

Try using it like this; it works for me. The one remaining issue is that when we run it, only the old emailable report gets attached to the mail.
API Manager uses org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler to authenticate requests to the gateway using OAuth authentication tokens. To change this behavior, you have three options: Delete the authentication handler from your API definition on the gateway (or from the velocity template, to apply to all API publishing). Create your own authentication handler and replace the default authentication handler in the API definitions and/or velocity template. See: https://docs.wso2.com/display/AM200/Writing+Custom+Handlers Create a new handler that takes an authorization query string parameter and adds the value to the headers of the incoming request. Add this handler before the authentication handler in the handler workflow for your API. That being said, why do you want to do this? There are a number of GUIs available that make sending HTTP requests just as straight-forward as using a browser (https://www.getpostman.com/) so unless you have a very good reason to change this behavior, you probably should not.
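For options 1 and 2 the change is made in the API's synapse configuration (or in the velocity template so that it applies to every published API). A rough sketch of the handler chain there follows; the custom class name is purely illustrative:

<!-- Handler chain of a published API (synapse configuration). -->
<handlers>
    <!-- hypothetical handler that copies an "authorization" query parameter into the request headers -->
    <handler class="com.example.QueryParamAuthHandler"/>
    <!-- the default OAuth handler; remove this element to disable token checks entirely -->
    <handler class="org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler"/>
</handlers>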
You can use the DirectLine Rest API (see docs). The Direct Line API is a simple REST API for connecting directly to a single bot. This API is intended for developers writing their own client applications, web chat controls, mobile apps, or service-to-service applications that will talk to their bot. Within the Direct Line API, you will find: An authentication mechanism using standard secret/token patterns The ability to send messages from your client to your bot via an HTTP POST message The ability to receive messages by polling HTTP GET A stable schema, even if your bot changes its protocol version You need to enable the DirectLine channel for your bot on (see screenshot) You don't have to access each specific channel endpoint separately, you can do it all (with some limitations) through the DirectLine API. Start a New Conversation POST /api/conversations Get Messages in a Conversation GET /api/conversations/{conversationId}/messages Send a Message POST /api/conversations/{conversationId}/messages The full details are in the docs as linked above. Hope this helps
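As a hedged illustration of the endpoints listed above — the exact base URL, message schema, and Authorization scheme depend on the Direct Line version you enable (older versions use "BotConnector <secret>", Direct Line 3.0 uses "Bearer" and /v3/directline/... paths), so verify against the linked docs:

POST https://directline.botframework.com/api/conversations
Authorization: BotConnector YOUR_DIRECTLINE_SECRET

POST https://directline.botframework.com/api/conversations/{conversationId}/messages
Authorization: BotConnector YOUR_DIRECTLINE_SECRET
Content-Type: application/json

{ "text": "Hello bot" }

GET https://directline.botframework.com/api/conversations/{conversationId}/messages
Authorization: BotConnector YOUR_DIRECTLINE_SECRET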
You should pass the size of the destination buffer to function InterCalc(). As written, it can only read sizeof(char*) - 1 bytes at a time. You should also check for end of file. int InterCalc(char *my_string, size_t size) { if (fgets(my_string, size, stdin) == NULL || strcmp(my_string, "exit\n") == 0) { printf("Program ended\n"); return 0; } else if (isValidExpression(my_string) == 0) { printf("Expression error\n"); return 0; } else { return 1; } } Invoke from main(): #include <stdio.h> #include "evalexpression.h" int main(void) { char string[100]; int result; result = InterCalc(string, sizeof(string)); CalcFilter(result, string); return 0; } Notes: you should use the <stdio.h> syntax for standard headers. you should prevent buffer overflow by passing the maximum number of characters for %s formats in sscanf(): sscanf(str, "%f %9s %f", &f1, ops, &f2); EDIT: There is another problem in GetExrValue(): you switch on values from 0 to 5 for op instead of the operation character. Here is a way to correct this: float getExprValue(void) { switch (getOperator()) { case '+': return getFstOperand() + getSecOperand(); case '-': return getFstOperand() - getSecOperand(); case '/': return getFstOperand() / getSecOperand(); case '*': return getFstOperand() * getSecOperand(); case '^': return pow(getFstOperand(), getSecOperand()); default: return 0; } }
I don't see it mentioned, so: in a B2B collaboration you have to invite the user from the other tenant first. The steps are:

Invite and authorize a set of external users by uploading a comma-separated values (CSV) file.
An invitation is sent to the external users. The invited user either signs in to an existing work account with Microsoft (managed in Azure AD) or gets a new work account in Azure AD.
After signing in, the user is redirected to the app that was shared with them.

That works perfectly in my case. Regarding some problems which I've detected:

A trailing "/" at the end of the Active Directory resource: try removing it, as this may cause problems.

Below you will find some code to get the authentication headers:

string aadTenant = WebServiceClientConfiguration.Settings.ActiveDirectoryTenant;
string clientAppId = WebServiceClientConfiguration.Settings.ClientAppId;
string clientKey = WebServiceClientConfiguration.Settings.ClientKey;
string aadResource = WebServiceClientConfiguration.Settings.ActiveDirectoryResource;

AuthenticationContext authenticationContext = new AuthenticationContext(aadTenant);
ClientCredential clientCredential = new ClientCredential(clientAppId, clientKey);
UserPasswordCredential upc = new UserPasswordCredential(WebServiceClientConfiguration.Settings.UserName, WebServiceClientConfiguration.Settings.Password);
AuthenticationResult authenticationResult = await authenticationContext.AcquireTokenAsync(aadResource, clientAppId, upc);
return authenticationResult.CreateAuthorizationHeader();

Applications provisioned in Azure AD are not enabled to use the OAuth2 implicit grant by default. You need to explicitly opt in - more details can be found here: Azure AD OAuth2 implicit grant
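A minimal usage sketch of the returned header — assuming the snippet above is wrapped in a hypothetical GetAuthorizationHeader() method and you are calling an endpoint on the same resource (the method name and URL path are illustrative only):

// C# sketch -- GetAuthorizationHeader() and the endpoint path are assumptions
using (var client = new HttpClient())
{
    string authHeader = await GetAuthorizationHeader();   // the ADAL snippet above
    client.DefaultRequestHeaders.Add("Authorization", authHeader);

    HttpResponseMessage response = await client.GetAsync(aadResource + "/data/Customers");
    string payload = await response.Content.ReadAsStringAsync();
}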
I have used the same approach in my own projects. The problem we have is that the client is not secure: in order to generate or refresh a token, you need to pass secure information to the authorization server. I have done the same as you, basically: let the back end handle the tokens and their temporary storage. You cannot and should not trust the client with the important information that lets you generate tokens.

In terms of delays, I wouldn't worry too much, since you're not doing that much extra work; you won't even notice them. I have a system like this built and used by hundreds of thousands of users with absolutely no issues.

Now, you have said a few things here which make me wonder what you are doing. OAuth2 is not a user authentication system, it's an application authentication system. You don't pass a user and their password and generate a token for them; you pass a ClientID and ClientSecret and they generate a token for you (see the sketch below). Then you have an endpoint which gives you the user details: you pass your user id or username and get the details of that user.

An expired token does not mean your user is logged out. Those are two completely different things. How are you going to expire a token, for example, when your user wants to log out? You can't; your token will still be valid until it expires after the set amount of time has passed. A token may be usable for, let's say, half an hour, but your user may use the website for an hour. So before you hit any API endpoint, you can check: has this token expired yet? If yes, you can go and refresh it and keep working without having to bother your user with a new login screen.

The whole point of an OAuth2 system is to make sure that only authorised clients can access it. A client is not a user, it's an application. You can have a website, for example, and you only want users of that website to access your API. You can have endpoints like ValidateUser, for example, where you take a username and a password and return a yes or no, and then you log your user in based on that.
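For illustration, this is roughly what a plain OAuth2 client-credentials token request looks like; the endpoint path and host are placeholders, and some servers expect the credentials in a Basic Authorization header instead of the body:

POST /oauth/token HTTP/1.1
Host: auth.example.com
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials&client_id=YOUR_CLIENT_ID&client_secret=YOUR_CLIENT_SECRET

The response is a JSON document containing access_token and expires_in, which is what the back end caches and refreshes on the user's behalf.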
Encrypt the file (e.g. with OpenSSL). Save only the encrypted file to the version control system (you're using a version control system, right ?). Below you'll see an OpenSSL encryption/decryption example. You can automate the process with a scripting to fit better to your workflow. Note that instead of manually typing the encryption password you can also use -pass command line argument (see PASS PHRASE ARGUMENTS in man 1 openssl for the details). $ cat /tmp/script.sql alter user batman identified by darknight; alter user robin identified by wonderboy; $ openssl enc -aes-256-cbc -e -base64 -salt -in /tmp/script.sql -out /tmp/script.enc enter aes-256-cbc encryption password: Verifying - enter aes-256-cbc encryption password: $ rm /tmp/script.sql $ cat /tmp/script.enc U2FsdGVkX1/R71mmmadUXTrFz2G6TW5KQmivuxoE1UbjcRGCgw5bAQLGcGspxubx wXkk8/rrwoPdysiULOgzB3yikuTMf8kFvLUBr0+QxE50Vs3iBaderelVJ9ZN9chv 0zBbISq2M5z47xFA6JIeDg== $ openssl enc -aes-256-cbc -d -base64 -salt -in /tmp/script.enc -out /tmp/script.sql enter aes-256-cbc decryption password: $ cat /tmp/script.sql alter user batman identified by darknight; alter user robin identified by wonderboy; $
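If you script the workflow, the passphrase can be supplied non-interactively via -pass instead of being typed; for example, reading it from an environment variable (the variable name below is just an example):

# encrypt (reads the passphrase from $SCRIPT_KEY)
openssl enc -aes-256-cbc -e -base64 -salt -pass env:SCRIPT_KEY \
        -in /tmp/script.sql -out /tmp/script.enc

# decrypt
openssl enc -aes-256-cbc -d -base64 -salt -pass env:SCRIPT_KEY \
        -in /tmp/script.enc -out /tmp/script.sql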
... and doesn't allow me to save encrypted data to a byte array when encrypting. I want to know the mechanism beneath in order to save the temporary data. Its not really clear to me what you want to do with a temporary array, so the answers below may not be correct. There are two ways to create a temporary result in an array. First is a serial operation where you encrypt the file into an array, and then write the array to disk. Second is a parallel operation where both the array and encrypted file are created at the same time. You can't use a C++11 std::array because the size of the array is not known at runtime. You can use a std::vector, and a snippet is provided blow. Serial #include <fstream> #include <iostream> #include <string> #include <memory> using namespace std; #include "osrng.h" #include "eax.h" #include "modes.h" #include "blowfish.h" #include "filters.h" #include "files.h" using namespace CryptoPP; int main(int argc, char* argv[]) { SecByteBlock key(Blowfish::DEFAULT_KEYLENGTH), iv(Blowfish::BLOCKSIZE); string ifilename("config.h"), ofilename("config.h.enc"); memset(key, 0x00, key.size()); memset(iv, 0x00, iv.size()); EAX< Blowfish >::Encryption enc; enc.SetKeyWithIV(key, key.size(), iv, sizeof(iv)); ifstream strm(ifilename.c_str(), ios::in | ios::binary); size_t len = strm.seekg(0, std::ios_base::end).tellg(); strm.seekg(0, std::ios_base::beg); cout << "Data size: " << len << ", tag size: " << enc.TagSize() << endl; FileSource fs1(strm, false); len += enc.TagSize(); cout << "Expected encrypted data and tag size: " << len << endl; len += Blowfish::BLOCKSIZE; cout << "Overcommitted encrypted data and tag size: " << len << endl; unique_ptr<byte[]> ptr(new byte[len]); ArraySink as1(ptr.get(), len); fs1.Detach(new AuthenticatedEncryptionFilter(enc, new Redirector(as1))); fs1.PumpAll(); len = as1.TotalPutLength(); cout << "Encrypted data and tag size: " << as1.TotalPutLength() << endl; ArraySource as2(ptr.get(), len, true, new FileSink(ofilename.c_str())); return 0; } The serial example produces: $ ./test.exe Data size: 38129, tag size: 8 Expected encrypted data and tag size: 38137 Overcommitted encrypted data and tag size: 38145 Encrypted data and tag size: 38137 $ ls -l config.* -rw-r--r--. 1 ... 38223 Nov 19 04:40 config.compat -rw-r--r--. 1 ... 38129 Nov 19 04:40 config.h -rw-r--r--. 1 ... 
38137 Nov 19 06:03 config.h.enc Parallel #include <fstream> #include <iostream> #include <string> #include <memory> using namespace std; #include "osrng.h" #include "eax.h" #include "modes.h" #include "blowfish.h" #include "filters.h" #include "files.h" #include "channels.h" using namespace CryptoPP; int main(int argc, char* argv[]) { SecByteBlock key(Blowfish::DEFAULT_KEYLENGTH), iv(Blowfish::BLOCKSIZE); string ifilename("config.h"), ofilename("config.h.enc"); memset(key, 0x00, key.size()); memset(iv, 0x00, iv.size()); EAX< Blowfish >::Encryption enc; enc.SetKeyWithIV(key, key.size(), iv, sizeof(iv)); ifstream strm(ifilename.c_str(), ios::in | ios::binary); size_t len = strm.seekg(0, std::ios_base::end).tellg(); strm.seekg(0, std::ios_base::beg); // Overcommit len += enc.TagSize() + Blowfish::BLOCKSIZE; // The one and only source FileSource fs1(strm, false); // The first sink FileSink fs2(ofilename.c_str(), true); // The second sink unique_ptr<byte[]> ptr(new byte[len]); ArraySink as1(ptr.get(), len); // The magic to output to both sinks ChannelSwitch cs; cs.AddDefaultRoute(as1); cs.AddDefaultRoute(fs2); fs1.Detach(new AuthenticatedEncryptionFilter(enc, new Redirector(cs))); fs1.PumpAll(); return 0; } The parallel example produces: $ ./test.exe $ ls -l config.* -rw-r--r--. 1 ... 38223 Nov 19 04:40 config.compat -rw-r--r--. 1 ... 38129 Nov 19 04:40 config.h -rw-r--r--. 1 ... 38137 Nov 19 06:02 config.h.enc std::vector Instead of: unique_ptr<byte[]> ptr(new byte[len]); ArraySink as1(ptr.get(), len); You can use: std::vector<byte> v; ... v.resize(len); ArraySink as(&v[0], v.size()); ... // Perform encryption fs.Detach(new AuthenticatedEncryptionFilter(enc, new Redirector(as))); fs.PumpAll(); // Resize now you know the size of ciphertext and tag v.resize(as.TotalPutLength());
First of all, your fscanf() call returns the number of matches and you are not checking that, it could be returning 0 and not EOF because there was no match and also the end of file wasn't reached in the previous read operation, so you need to check that it returns 3 not EOF. Also, you should try to prevent a buffer overflow. You can use a length modifier for the "%s" specifier and use the length of the target array - 1. Finally, the middle value in every line seems to be an integer so you may use "%4s%d%6s" as the format parameter to fscanf() and remember, check against 3 not EOF. You should read on how to fill a combo box in gtk+. You can use a GtkTreeModel and GtkTreeIters to fill it very easily and the way you would do it would be very clear too. Another benefit is that you would have much more control on what data your combo box items hold, you can use multiple columns, store extra data that you would easily retrieve from the model. This is an example of how to do it, error handling should be improved a lot #include <gtk/gtk.h> typedef enum Columns { Name, Children, Location, ColumnsCount } Columns; static GtkWidget * create_and_populate_combo_box() { GtkListStore *model; GtkWidget *combo; GtkTreeIter iter; GtkCellRenderer *column; char name[4]; int children; char location[6]; FILE *file; // Create a new list store model = gtk_list_store_new(ColumnsCount, G_TYPE_STRING, G_TYPE_INT, G_TYPE_STRING); if (model == NULL) return NULL; // Create the combo box combo = gtk_combo_box_new_with_model(GTK_TREE_MODEL(model)); if (combo == NULL) return NULL; // Create a cell renderer to render text only (this is a basic one) column = gtk_cell_renderer_text_new(); if (column == NULL) return NULL; // Install the cell renderer and set the text column gtk_cell_layout_pack_start(GTK_CELL_LAYOUT(combo), column, TRUE); gtk_cell_layout_set_attributes(GTK_CELL_LAYOUT(combo), column, "text", Name, NULL); // Read data from the file file = fopen("data.dat", "r"); if (file == NULL) return NULL; while (fscanf(file, "%4s%d%6s", name, &children, location) == 3) { gtk_list_store_append(GTK_LIST_STORE(model), &iter); gtk_list_store_set(GTK_LIST_STORE(model), &iter, Name, name, Children, children, Location, location, -1); } fclose(file); // Set the first value as active gtk_combo_box_set_active(GTK_COMBO_BOX(combo), 0); return combo; } int main(int argc, char **argv) { GtkWidget *window; GtkWidget *combo; gtk_init(&argc, &argv); window = gtk_window_new(GTK_WINDOW_TOPLEVEL); if (window == NULL) return -1; g_signal_connect(G_OBJECT(window), "destroy", G_CALLBACK(gtk_main_quit), NULL); combo = create_and_populate_combo_box(); if (combo == NULL) return -1; gtk_widget_set_size_request(window, 100, -1); gtk_container_add(GTK_CONTAINER(window), combo); gtk_widget_show_all(window); gtk_main(); return 0; }
One possibility is to go for a federated identity management system like Keycloak. Keycloak offers adapters for Spring as well as Android, has full OAuth2 support, and lets you use Facebook as an identity provider. It gives you a lot of benefit, as many of the features you will most likely need are already there.

On the other hand, it is a big topic, so be aware that it will take you some time to bring the whole setup alive. You will need to host Keycloak, configure clients for the Android app and your web application, introduce the Keycloak adapter to both the Android application and the web application, and finally configure Facebook as an identity provider.

EDIT: Have a look here, this seems promising.

In general, I have to admit that I didn't integrate Keycloak with Facebook or Android myself. We use it to secure our Spring Boot and Java EE applications; I was heavily involved only in that integration and thereby stumbled upon the stated functionality. There is also the possibility to do OAuth2 in Android by hand, see. Here is an example of how to do the Facebook integration.

I hope this helps and good luck ;)
I have implemented something like this before, using EclipseLink and supporting MySQL (and 6 other DBMSs). In that case the application was used within an enterprise, but you could have multiple projects isolating different departments (tenants), and you could define users that could see data from all projects.

The key is to introduce a Tenant entity as the top-level entity in your ER diagram; every other entity belongs to a tenant.

@Entity
public class Tenant {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @OneToMany(mappedBy = "tenant", cascade = CascadeType.ALL)
    private Collection<User> users;

    @OneToMany(mappedBy = "tenant", cascade = CascadeType.ALL)
    private Collection<TenantConfiguration> configurations;
}

One challenge you will have to deal with is security. On every request you have to verify that the user has permission to access the requested entities (never trust the front end). This means that your OAuth2 authentication (created during login) needs to contain a reference to an entity which allows you to determine the user's tenancy and permissions.

For performance reasons you typically include the tenancy (TENANCY_ID) in JPA queries, to avoid loading a lot of data and filtering it in memory (see the sketch below). Good luck
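A small sketch of what such a tenant-scoped query can look like; the Invoice entity and the way the tenant id is obtained are assumptions for illustration only:

// Restrict the query to the caller's tenant instead of filtering in memory
TypedQuery<Invoice> query = entityManager.createQuery(
        "SELECT i FROM Invoice i WHERE i.tenant.id = :tenantId", Invoice.class);
query.setParameter("tenantId", currentTenantId); // resolved from the OAuth2 authentication
List<Invoice> invoices = query.getResultList();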
I don't know why, but when I commented out the email target and logger, the custom LayoutRenderer started working again. For reference, here is the current nlog.config file, with email stuff commented out. <?xml version="1.0" encoding="utf-8" ?> <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" autoReload="true" internalLogLevel="Warn" internalLogFile="C:\temp\Logs\internal-nlog.txt"> <targets> <target xsi:type="Null" name="blackhole" /> <target name="database" xsi:type="Database" > <connectionString> Server=localhost;Database=LogDatabase;Trusted_Connection=True;MultipleActiveResultSets=true </connectionString> <commandText> insert into dbo.Log ( UserId, Application, LogDate, Level, Message, Exception ) values ( @User, @Application, @Logged, @Level, @Message, @Exception ); </commandText> <parameter name="@application" layout="WebApplicationNameHere" /> <parameter name="@logged" layout="${date}" /> <parameter name="@level" layout="${level}" /> <parameter name="@message" layout="${message}" /> <parameter name="@user" layout="${aspnet-user-id} " /> <!-- This is a custom attribute, set in NLogHelper.cs--> <parameter name="@exception" layout="${exception:tostring}" /> </target> <!--<target name="email" xsi:type="Mail" subject="System Error" body="Date-Time:${newline}${longdate}${newline}${newline}Machine:${newline}${machinename}${newline}${newline}User:${newline}${aspnet-user-identity} ${newline}${newline}Message:${newline}${message}" to="[email protected]" from="[email protected]" Encoding="UTF-8" smtpUsername="[email protected]" enableSsl="true" smtpPassword="passwordgoeshere" smtpAuthentication="Basic" smtpServer="smtp.mailgun.org" smtpPort="587" /> --> </targets> <rules> <!--Skip Microsoft's verbose logging --> <logger name="Microsoft.*" minlevel="Trace" writeTo="blackhole" final="true" /> <logger name="*" minlevel="Trace" writeTo="database" /> <!--<logger name="*" level="Error" writeTo="email"/>--> </rules> <extensions> <!--enable NLog.Web for ASP.NET Core--> <add assembly="NLog.Web.AspNetCore"/> </extensions> </nlog> If I uncomment the email part, I get the error below, which doesn't say anything about the email section being a problem. The only reason I even tried without the email section was because I realized that I used to get emails when an exception happened, and now I don't, and was trying to fix that. 2016-11-22 12:42:13.7563 Warn Error has been raised. 
Exception: NLog.NLogConfigurationException: Error when setting property 'Layout' on NLog.Targets.DatabaseParameterInfo ---> System.ArgumentException: LayoutRenderer cannot be found: 'aspnet-user-id' at NLog.Config.Factory`2.CreateInstance(String name) at NLog.Layouts.LayoutParser.ParseLayoutRenderer(ConfigurationItemFactory configurationItemFactory, SimpleStringReader sr) at NLog.Layouts.LayoutParser.CompileLayout(ConfigurationItemFactory configurationItemFactory, SimpleStringReader sr, Boolean isNested, String& text) at NLog.Layouts.SimpleLayout.set_Text(String value) at NLog.Internal.PropertyHelper.TryNLogSpecificConversion(Type propertyType, String value, Object& newValue, ConfigurationItemFactory configurationItemFactory) at NLog.Internal.PropertyHelper.SetPropertyFromString(Object obj, String propertyName, String value, ConfigurationItemFactory configurationItemFactory) --- End of inner exception stack trace --- at NLog.Internal.PropertyHelper.SetPropertyFromString(Object obj, String propertyName, String value, ConfigurationItemFactory configurationItemFactory) at NLog.Config.XmlLoggingConfiguration.ConfigureObjectFromAttributes(Object targetObject, NLogXmlElement element, Boolean ignoreType) at NLog.Config.XmlLoggingConfiguration.AddArrayItemFromElement(Object o, NLogXmlElement element) at NLog.Config.XmlLoggingConfiguration.SetPropertyFromElement(Object o, NLogXmlElement element) at NLog.Config.XmlLoggingConfiguration.ParseTargetElement(Target target, NLogXmlElement targetElement) at NLog.Config.XmlLoggingConfiguration.ParseTargetsElement(NLogXmlElement targetsElement) at NLog.Config.XmlLoggingConfiguration.ParseNLogElement(NLogXmlElement nlogElement, String filePath, Boolean autoReloadDefault) at NLog.Config.XmlLoggingConfiguration.Initialize(XmlReader reader, String fileName, Boolean ignoreErrors)
The most sensible approach would be to use two separate versions of the function - one for version 8/9 and one for 10. That being said, the following approach should work (untested, I don't have access to any instances running Oracle 10 or lower): check the database version (either by checking the existence of DBMS_DB_VERSION or by parsing the output of PRODUCT_COMPONENT_VERSION or V$VERSION which - to the best of my knowledge - already existed in version 8). use dynamic SQL to either call dbms_crypto or return the string unchanged (since your package won't compile on 8/9 if you reference dbms_crypto directly) Example (untested): create or replace function s2_encrypt(paramToEncrypt in VARCHAR2, encrypt8BYTEKey in RAW) return RAW is encryptedReturnValue RAW(2000); objectCount pls_integer; begin select count(*) into objectCount from all_objects where object_name = 'DBMS_CRYPTO'; -- Oracle 8/9: return string unchanged if objectCount = 0 then encryptedReturnValue := paramToEncrypt; else execute immediate ' declare encryptionMode number := DBMS_CRYPTO.ENCRYPT_AES128 + DBMS_CRYPTO.CHAIN_CBC + DBMS_CRYPTO.PAD_PKCS5; begin :encryptedReturnValue := dbms_crypto.encrypt(UTL_I18N.STRING_TO_RAW(:paramToEncrypt, ''AL32UTF8''), encryptionMode, :encrypt8BYTEKey); end;' using out encryptedReturnValue, in paramToEncrypt, in encrypt8BYTEKey; end if; return encryptedReturnValue; end; Usage (11g - 8i apparently did not have UTL_I18N, see comments) select s2_encrypt( 'hello world', UTL_I18N.STRING_TO_RAW ('8232E3F8BDE7703C', 'AL32UTF8')) from dual;
Replacing the laravel authentication with a custom authentication I had built my laravel project and then had a task to replace the larevel default authentication with a custom authentication module I could not find any post that could help me fix this issue and had to refer to many articles . There fore i decided to make a post on how this could be done So as to help any one else facing the similar issue. 1.Files needed to be modified : a) config/auth.php : Replace your eloquent driver with your custom driver return [ /* |-------------------------------------------------------------------------- | Default Authentication Driver |-------------------------------------------------------------------------- | | This option controls the authentication driver that will be utilized. | This driver manages the retrieval and authentication of the users | attempting to get access to protected areas of your application. | | Supported: "database", "eloquent" | */ // 'driver' => 'eloquent', 'driver' => 'custom', /* |-------------------------------------------------------------------------- | Authentication Model |-------------------------------------------------------------------------- | | When using the "Eloquent" authentication driver, we need to know which | Eloquent model should be used to retrieve your users. Of course, it | is often just the "User" model but you may use whatever you like. | */ 'model' => 'App\User', /* |-------------------------------------------------------------------------- | Authentication Table |-------------------------------------------------------------------------- | | When using the "Database" authentication driver, we need to know which | table should be used to retrieve your users. We have chosen a basic | default value but you may easily change it to any table you like. | */ 'table' => 'user', /* |-------------------------------------------------------------------------- | Password Reset Settings |-------------------------------------------------------------------------- | | Here you may set the options for resetting passwords including the view | that is your password reset e-mail. You can also set the name of the | table that maintains all of the reset tokens for your application. | | The expire time is the number of minutes that the reset token should be | considered valid. This security feature keeps tokens short-lived so | they have less time to be guessed. You may change this as needed. | */ 'password' => [ 'email' => 'emails.password', 'table' => 'password_resets', 'expire' => 60, ], ]; b) config/app.php: Add your custom provider to the list of providers 'App\Providers \CustomAuthProvider', 2.Files needed to be added a. providers/CustomAuthProvider.php: Create a new Custom Provider that uses the custom driver that was defined earlier use App\Auth\CustomUserProvider; use Illuminate\Support\ServiceProvider; class CustomAuthProvider extends ServiceProvider { /** * Bootstrap the application services. * * @return void */ public function boot() { $this->app['auth']->extend('custom',function() { return new CustomUserProvider(); }); } /** * Register the application services. * * @return void */ public function register() { // } } b. Auth/CutomerUserProvider.php This class will replace the eloquentUserProvider and where all house keeping procedrues can be initiated (after login / before logout) . 
namespace App\Auth; use App\UserPoa; use Carbon\Carbon; use Illuminate\Auth\GenericUser; use Illuminate\Contracts\Auth\Authenticatable; use Illuminate\Contracts\Auth\UserProvider; class CustomUserProvider implements UserProvider { /** * Retrieve a user by their unique identifier. * * @param mixed $identifier * @return \Illuminate\Contracts\Auth\Authenticatable|null */ public function retrieveById($identifier) { // TODO: Implement retrieveById() method. $qry = UserPoa::where('admin_id','=',$identifier); if($qry->count() >0) { $user = $qry->select('admin_id', 'username', 'first_name', 'last_name', 'email', 'password')->first(); $attributes = array( 'id' => $user->admin_id, 'username' => $user->username, 'password' => $user->password, 'name' => $user->first_name . ' ' . $user->last_name, ); return $user; } return null; } /** * Retrieve a user by by their unique identifier and "remember me" token. * * @param mixed $identifier * @param string $token * @return \Illuminate\Contracts\Auth\Authenticatable|null */ public function retrieveByToken($identifier, $token) { // TODO: Implement retrieveByToken() method. $qry = UserPoa::where('admin_id','=',$identifier)->where('remember_token','=',$token); if($qry->count() >0) { $user = $qry->select('admin_id', 'username', 'first_name', 'last_name', 'email', 'password')->first(); $attributes = array( 'id' => $user->admin_id, 'username' => $user->username, 'password' => $user->password, 'name' => $user->first_name . ' ' . $user->last_name, ); return $user; } return null; } /** * Update the "remember me" token for the given user in storage. * * @param \Illuminate\Contracts\Auth\Authenticatable $user * @param string $token * @return void */ public function updateRememberToken(Authenticatable $user, $token) { // TODO: Implement updateRememberToken() method. $user->setRememberToken($token); $user->save(); } /** * Retrieve a user by the given credentials. * * @param array $credentials * @return \Illuminate\Contracts\Auth\Authenticatable|null */ public function retrieveByCredentials(array $credentials) { // TODO: Implement retrieveByCredentials() method. $qry = UserPoa::where('username','=',$credentials['username']); if($qry->count() >0) { $user = $qry->select('admin_id','username','first_name','last_name','email','password')->first(); return $user; } return null; } /** * Validate a user against the given credentials. * * @param \Illuminate\Contracts\Auth\Authenticatable $user * @param array $credentials * @return bool */ public function validateCredentials(Authenticatable $user, array $credentials) { // TODO: Implement validateCredentials() method. // we'll assume if a user was retrieved, it's good if($user->username == $credentials['username'] && $user->getAuthPassword() == md5($credentials['password'].\Config::get('constants.SALT'))) { $user->last_login_time = Carbon::now(); $user->save(); return true; } return false; } } UsePoa (This is my model for the admin table): This is a Model class that i created for my admin table .It implements Illuminate\Contracts\Auth\Authenticatable use Illuminate\Auth\Authenticatable; use Illuminate\Database\Eloquent\Model; use Illuminate\Contracts\Auth\Authenticatable as AuthenticatableContract; class UserPoa extends Model implements AuthenticatableContract { use Authenticatable; protected $table = 'admin'; protected $primaryKey = 'admin_id'; public $timestamps = false; } 3.Files need to know about Guard.php This is the class that will call your User Provider depending on what is defined in the driver. 
Originally this would be the EloquentUserProvider, but in this case it has been replaced with the CustomUserProvider. Below is how the methods of the CustomUserProvider are called by the Guard (a usage sketch follows).

1. Login:
   A. retrieveByCredentials is called to check whether the user exists.
   B. validateCredentials is called to verify that the username and password are correct.
   Note: the object produced by retrieveByCredentials is passed to validateCredentials, so no second database access is required.

2. Authenticating a page: whenever an attempt is made to see whether a user is logged in, retrieveById($identifier) is called.

3. Logout with a "remember me" setup: the method updateRememberToken(Authenticatable $user, $token) is called.
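For completeness, a hedged sketch of what a login action still looks like with this setup: Auth::attempt() goes through the Guard, which now delegates to CustomUserProvider::retrieveByCredentials()/validateCredentials(). The route, redirect targets, and field names are illustrative only.

// Laravel 5.x style controller action (illustrative)
public function login(\Illuminate\Http\Request $request)
{
    $credentials = $request->only('username', 'password');

    if (\Auth::attempt($credentials)) {
        // CustomUserProvider found and validated the user against the admin table
        return redirect()->intended('/dashboard');
    }

    return redirect()->back()->withErrors(['login' => 'Invalid username or password']);
}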
create a table to store the values of failed attempts ex : user_attempts Write custom event listener @Component("authenticationEventListner") public class AuthenticationEventListener implements AuthenticationEventPublisher { @Autowired UserAttemptsServices userAttemptsService; @Autowired UserService userService; private static final int MAX_ATTEMPTS = 3; static final Logger logger = LoggerFactory.getLogger(AuthenticationEventListener.class); @Override public void publishAuthenticationSuccess(Authentication authentication) { logger.info("User has been logged in Successfully :" +authentication.getName()); userAttemptsService.resetFailAttempts(authentication.getName()); } @Override public void publishAuthenticationFailure(AuthenticationException exception, Authentication authentication) { logger.info("User Login failed :" +authentication.getName()); String username = authentication.getName().toString(); UserAttempts userAttempt = userAttemptsService.getUserAttempts(username); User userExists = userService.findBySSO(username); int attempts = 0; String error = ""; String lastAttempted = ""; if (userAttempt == null) { if(userExists !=null ){ userAttemptsService.insertFailAttempts(username); } } else { attempts = userAttempt.getAttempts(); lastAttempted = userAttempt.getLastModified(); userAttemptsService.updateFailAttempts(username, attempts); if (attempts + 1 >= MAX_ATTEMPTS) { error = "User account is locked! <br>Username : " + username+ "<br>Last Attempted on : " + lastAttempted; throw new LockedException(error); } } throw new BadCredentialsException("Invalid User Name and Password"); } } 3.Security Configuration 1) @Autowired @Qualifier("authenticationEventListner") AuthenticationEventListener authenticationEventListner; 2) @Bean public AuthenticationEventPublisher authenticationListener() { return new AuthenticationEventListener(); } 3) @Autowired public void configureGlobalSecurity(AuthenticationManagerBuilder auth) throws Exception { auth.userDetailsService(userDetailsService).passwordEncoder(passwordEncoder()); //configuring custom user details service auth.authenticationProvider(authenticationProvider); // configuring login success and failure event listener auth.authenticationEventPublisher(authenticationEventListner); }
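A possible shape for the user_attempts table referenced in step 1 — the column names are assumptions inferred from getAttempts()/getLastModified() in the listener above, so adjust them to your actual entity mapping:

-- MySQL-style sketch (hypothetical column names)
CREATE TABLE user_attempts (
    id            INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    username      VARCHAR(50)  NOT NULL,
    attempts      INT          NOT NULL DEFAULT 0,
    last_modified DATETIME     NULL
);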