According to https://github.com/spring-projects/spring-security/issues/6742, it seems that the token is intentionally not refreshed: "An ID Token typically comes with an expiration date. The RP MAY rely on it to expire the RP session." Spring does not. Two enhancements mentioned at the end of that issue should solve some of the refresh problems; both are still open. As a workaround, I implemented a GenericFilterBean which checks the token and clears the authentication in the current security context, so that a new token has to be obtained.

@Configuration
public class RefreshTokenFilterConfig {

    @Bean
    GenericFilterBean refreshTokenFilter(OAuth2AuthorizedClientService clientService) {
        return new GenericFilterBean() {
            @Override
            public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse,
                    FilterChain filterChain) throws IOException, ServletException {
                Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
                if (authentication instanceof OAuth2AuthenticationToken) {
                    OAuth2AuthenticationToken token = (OAuth2AuthenticationToken) authentication;
                    OAuth2AuthorizedClient client = clientService.loadAuthorizedClient(
                            token.getAuthorizedClientRegistrationId(), token.getName());
                    // loadAuthorizedClient may return null, e.g. after a restart
                    OAuth2AccessToken accessToken = client == null ? null : client.getAccessToken();
                    if (accessToken == null || accessToken.getExpiresAt().isBefore(Instant.now())) {
                        SecurityContextHolder.getContext().setAuthentication(null);
                    }
                }
                filterChain.doFilter(servletRequest, servletResponse);
            }
        };
    }
}

Additionally I had to add the filter to the security config:

@Bean
public WebSecurityConfigurerAdapter webSecurityConfigurer(GenericFilterBean refreshTokenFilter) {
    return new WebSecurityConfigurerAdapter() {
        @Override
        protected void configure(HttpSecurity http) throws Exception {
            http
                .addFilterBefore(refreshTokenFilter, AnonymousAuthenticationFilter.class)
                // ...
        }
    };
}

Implemented with spring-boot-starter-parent and dependencies in version 2.2.7.RELEASE:

spring-boot-starter-web
spring-boot-starter-security
spring-boot-starter-oauth2-client

I appreciate opinions about this workaround, since I'm still not sure whether such overhead is really needed in Spring Boot.
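If the access token can expire while a request is in flight, a small clock-skew buffer may help. A hedged variant of the expiry check above (the 60-second window is my own assumption, not something from the linked issue):

Instant expiresAt = accessToken.getExpiresAt();
// treat tokens that die within the next minute as already expired
if (expiresAt != null && expiresAt.isBefore(Instant.now().plusSeconds(60))) {
    SecurityContextHolder.getContext().setAuthentication(null);
}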
Q: "Does not connect with the specified remote_user: root but with the user 'ansible'." A: Very probably the remote_user is overridden by the variable ansible_user. The variable has a higher precedence. See the last section. For example - set_fact: ansible_user: admin - command: whoami remote_user: root register: result - debug: var: result.stdout give "result.stdout": "admin" Without the variable ansible_user this should work. remote_user is defined on the task level. For example, - command: whoami remote_user: admin register: result - debug: var: result.stdout - command: whoami remote_user: root register: result - debug: var: result.stdout give "result.stdout": "admin" "result.stdout": "root" Debug Put a debug task into the code and see the value of the variable ansible_user. For example - debug: var: ansible_user - name: "Create 'ansible' user" remote_user: root user: name: "ansible" Use ansible_user Use ansible_user if there shouldn't be any chance to override the value. See also parameter remote_user of the SSH connect plugin. remote_user is the parameter in the Ansible configuration. Instead, it is also possible to use variable ansible_user to change the remote user from a playbook, or task. The variable has the highest preference. See the last section. For example - command: whoami register: result vars: ansible_user: admin - debug: var: result.stdout - command: whoami register: result vars: ansible_user: root - debug: var: result.stdout work as expected and give "result.stdout": "admin" "result.stdout": "root" } Best practice is to use public key authentication with the password of the private key provided by ssh-agent. Disable root login But, the best practice is to Disable root login: "Use a normal user account to initiate your connection instead, together with sudo." For example - command: whoami register: result become: true become_method: sudo become_user: root vars: ansible_user: admin - debug: var: result.stdout gives "result.stdout": "root" The remote user admin must be allowed sudo, of course root@test_01> cat /usr/local/etc/sudoers ... admin ALL=(ALL) NOPASSWD: ALL Precedence It's necessary to understand that many configuration parameters can be overridden by variables from play to the task level. In most cases, these variables are created from the name of the parameter by the addition of the prefix ansible_. The variable ansible_user and parameter remote_user is an exception (FWIW, I'm not aware of any other exception). It's also important to keep in mind that the variables got precedence over playbook's keywords. As an example, become directives can also be specified as variables. For example - command: whoami register: result vars: ansible_user: admin ansible_become: true ansible_become_method: sudo ansible_become_user: root - debug: var: result.stdout gives "result.stdout": "root"
A well-written program reports invalid input with a comprehensible error message, not with a crash. Fortunately, it is possible to avoid scanf buffer overflow by either specifying a field width or using the a flag. When you specify a field width, you need to provide a buffer (using malloc or a similar function) of type char *. You need to make sure that the field width you specify does not exceed the number of bytes allocated to your buffer. On the other hand, you do not need to allocate a buffer if you specify the a flag character -- scanf will do it for you. Simply pass scanf a pointer to an unallocated variable of type char *, and scanf will allocate however large a buffer the string requires, and return the result in your argument. This is a GNU-only extension to scanf functionality.

Here is a code example that shows first how to safely read a string of fixed maximum length by allocating a buffer and specifying a field width, then how to safely read a string of any length by using the a flag.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *string1, *string2;

    string1 = malloc(25);   /* the %20s field width below must fit in this buffer */
    puts("Please enter a string of 20 characters or fewer.");
    scanf("%20s", string1);
    printf("\nYou typed the following string:\n%s\n\n", string1);

    puts("Now enter a string of any length.");
    scanf("%as", &string2); /* GNU extension: scanf allocates the buffer */
    printf("\nYou typed the following string:\n%s\n", string2);

    free(string1);
    free(string2);
    return 0;
}

There are a couple of things to notice about this example program. First, notice that the second argument passed to the first scanf call is string1, not &string1. The scanf function requires pointers as the arguments corresponding to its conversions, but a string variable is already a pointer (of type char *), so you do not need the extra layer of indirection here. However, you do need it for the second call to scanf. We passed it an argument of &string2 rather than string2, because we are using the a flag, which allocates a string variable big enough to contain the characters it reads, then returns a pointer to it.
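A portability note: POSIX.1-2008 later standardized the same allocating behavior under the m modifier, so on modern glibc and the BSDs you can use %ms instead of the GNU-only %as (which also clashes with %a as a float conversion in C99 mode). A minimal sketch:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *s = NULL;
    /* %ms: scanf allocates a buffer large enough for the input word */
    if (scanf("%ms", &s) == 1) {
        printf("You typed: %s\n", s);
        free(s); /* the caller owns the allocated buffer */
    }
    return 0;
}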
In my opinion the best way to manage Firebase authentication in Flutter is to use the provider package. Your Auth class is missing one important thing, which is the onAuthStateChanged stream. You can create a stream getter for onAuthStateChanged inside the Auth class. The Auth class will extend the ChangeNotifier class, which is part of the Flutter API.

class Auth extends ChangeNotifier {
  final FirebaseAuth _auth = FirebaseAuth.instance;

  // create a getter stream
  Stream<FirebaseUser> get onAuthStateChanged => _auth.onAuthStateChanged;

  // Sign in async functions here ..
}

Wrap your MaterialApp with ChangeNotifierProvider (part of the provider package) and return an instance of the Auth class in the create method like so:

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return ChangeNotifierProvider(
      create: (context) => Auth(),
      child: new MaterialApp(
        home: Landing(),
      ),
    );
  }
}

Now create the landing page as a stateless widget. Use a Consumer or Provider.of(context) and a StreamBuilder to listen to the auth changes and render the login page or home page as appropriate.

class Landing extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    Auth auth = Provider.of<Auth>(context);
    return StreamBuilder<FirebaseUser>(
      stream: auth.onAuthStateChanged,
      builder: (context, snapshot) {
        if (snapshot.connectionState == ConnectionState.active) {
          FirebaseUser user = snapshot.data;
          if (user == null) {
            return LogIn();
          }
          return Home();
        } else {
          return Scaffold(
            body: Center(
              child: CircularProgressIndicator(),
            ),
          );
        }
      },
    );
  }
}

You can read more about state management with provider from the official Flutter documentation. Follow this link: https://flutter.dev/docs/development/data-and-backend/state-mgmt/simple
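As a hedged sketch of the "sign in async functions" placeholder above (the method name is my own; exact return types vary between firebase_auth versions, so this version side-steps them):

Future<void> signIn(String email, String password) async {
  await _auth.signInWithEmailAndPassword(email: email, password: password);
  // no notifyListeners() needed here: onAuthStateChanged fires and the
  // StreamBuilder in Landing rebuilds automatically.
}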
I need to add some more details. I am using the doc code to explain further.

viewsets.ViewSet

class ViewSet(ViewSetMixin, views.APIView):
    """
    The base ViewSet class does not provide any actions by default.
    """
    pass

This means ViewSet inherits from two classes: ViewSetMixin (it just gives the binding of the 'GET' and 'POST' methods to the 'list' and 'create' actions) and views.APIView (this gives the authentication_classes, permission_classes, etc. attributes). So viewsets.ViewSet does not provide any concrete action methods by default; you have to manually override list, create, update, etc. yourself (a minimal sketch follows at the end of this answer).

viewsets.ModelViewSet

class ModelViewSet(mixins.CreateModelMixin,
                   mixins.RetrieveModelMixin,
                   mixins.UpdateModelMixin,
                   mixins.DestroyModelMixin,
                   mixins.ListModelMixin,
                   GenericViewSet):
    """
    A viewset that provides default create(), retrieve(), update(),
    partial_update(), destroy() and list() actions.
    """
    pass

This means ModelViewSet inherits almost all the mixins, so it provides the default list, create, update, etc. action methods, plus GenericViewSet (it provides the get_object and get_queryset methods; you'll need to either set the corresponding attributes, or override get_queryset()/get_serializer_class()). Because GenericViewSet inherits from GenericAPIView, ModelViewSet requires the queryset and serializer_class attributes to be set.

3. "Can the get_object() method be overridden in a (viewsets.ViewSet) class, or is it limited to (viewsets.ModelViewSet)?"

get_object and get_queryset belong to the GenericViewSet (GenericAPIView) class. In ModelViewSet this GenericViewSet is inherited by default, so they work in ModelViewSet; the get_object method is of no use in a plain ViewSet. For more info check this article.
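To make the difference concrete, here is a minimal sketch of the handwritten actions a plain ViewSet needs (User and UserSerializer are hypothetical stand-ins for your own model and serializer), which ModelViewSet would otherwise generate from queryset and serializer_class:

from django.shortcuts import get_object_or_404
from rest_framework import viewsets
from rest_framework.response import Response

# User and UserSerializer are your own model and serializer imports

class UserViewSet(viewsets.ViewSet):
    def list(self, request):
        # ModelViewSet gets this from ListModelMixin for free
        serializer = UserSerializer(User.objects.all(), many=True)
        return Response(serializer.data)

    def retrieve(self, request, pk=None):
        # ModelViewSet gets this from RetrieveModelMixin for free
        user = get_object_or_404(User.objects.all(), pk=pk)
        return Response(UserSerializer(user).data)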
The actual problem was that while generating the token, the property role wasn't included by the TokenService from @loopback/authentication. So I created a custom token service implementing this TokenService and added a property role during token generation. Later @loopback/authentication passes this role on to @loopback/authorization; you can access it in AuthorizationContext.principals[0]. Here is the code.

custom-token.service.ts

import {TokenService} from '@loopback/authentication';
import {inject} from '@loopback/context';
import {HttpErrors} from '@loopback/rest';
import {securityId, UserProfile} from '@loopback/security';
import {promisify} from 'util';
import {TokenServiceBindings} from '../keys';

const jwt = require('jsonwebtoken');
const signAsync = promisify(jwt.sign);
const verifyAsync = promisify(jwt.verify);

export class JWTService implements TokenService {
  constructor(
    @inject(TokenServiceBindings.TOKEN_SECRET)
    private jwtSecret: string,
    @inject(TokenServiceBindings.TOKEN_EXPIRES_IN)
    private jwtExpiresIn: string,
  ) {}

  async verifyToken(token: string): Promise<UserProfile> {
    if (!token) {
      throw new HttpErrors.Unauthorized(
        `Error verifying token : 'token' is null`,
      );
    }

    let userProfile: UserProfile;
    try {
      // decode user profile from token
      const decodedToken = await verifyAsync(token, this.jwtSecret);
      // don't copy over token fields 'iat' and 'exp', nor 'email', to user profile
      userProfile = Object.assign(
        {[securityId]: '', name: ''},
        {
          [securityId]: decodedToken.id,
          name: decodedToken.name,
          id: decodedToken.id,
          role: decodedToken.role,
        },
      );
    } catch (error) {
      throw new HttpErrors.Unauthorized(
        `Error verifying token : ${error.message}`,
      );
    }
    return userProfile;
  }

  async generateToken(userProfile: UserProfile): Promise<string> {
    if (!userProfile) {
      throw new HttpErrors.Unauthorized(
        'Error generating token : userProfile is null',
      );
    }
    const userInfoForToken = {
      id: userProfile[securityId],
      name: userProfile.name,
      role: userProfile.role,
    };
    // Generate a JSON Web Token
    let token: string;
    try {
      token = await signAsync(userInfoForToken, this.jwtSecret, {
        expiresIn: Number(this.jwtExpiresIn),
      });
    } catch (error) {
      throw new HttpErrors.Unauthorized(`Error encoding token : ${error}`);
    }
    return token;
  }
}

keys.ts

import {TokenService} from '@loopback/authentication';
import {BindingKey} from '@loopback/context'; // needed for BindingKey.create

export namespace TokenServiceConstants {
  export const TOKEN_SECRET_VALUE = 'myjwts3cr3t';
  export const TOKEN_EXPIRES_IN_VALUE = '600';
}

export namespace TokenServiceBindings {
  export const TOKEN_SECRET = BindingKey.create<string>(
    'authentication.jwt.secret',
  );
  export const TOKEN_EXPIRES_IN = BindingKey.create<string>(
    'authentication.jwt.expires.in.seconds',
  );
  export const TOKEN_SERVICE = BindingKey.create<TokenService>(
    'services.authentication.jwt.tokenservice',
  );
}

Then you have to bind this token service in application.ts:

application.ts

import {JWTService} from './services/token-service';
import {TokenServiceBindings, TokenServiceConstants} from './keys';

this.bind(TokenServiceBindings.TOKEN_SECRET).to(
  TokenServiceConstants.TOKEN_SECRET_VALUE,
);
this.bind(TokenServiceBindings.TOKEN_EXPIRES_IN).to(
  TokenServiceConstants.TOKEN_EXPIRES_IN_VALUE,
);
this.bind(TokenServiceBindings.TOKEN_SERVICE).toClass(JWTService);

controller.ts

import {authenticate, TokenService, UserService} from '@loopback/authentication';
import {Credentials, OPERATION_SECURITY_SPEC, TokenServiceBindings, UserServiceBindings} from '@loopback/authentication-jwt';
import {authorize} from '@loopback/authorization';

export class UserController {
  constructor(
    @repository(UserRepository)
    public userRepository: UserRepository,
    @inject(TokenServiceBindings.TOKEN_SERVICE)
    public jwtService: TokenService,
    @inject(UserServiceBindings.USER_SERVICE)
    public userService: UserService<User, Credentials>,
    @inject(SecurityBindings.USER, {optional: true})
    public users: UserProfile,
  ) {}

  @authenticate('jwt')
  @authorize({allowedRoles: ['admin'], voters: [basicAuthorization]})
  async fund() {}
}
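The basicAuthorization voter referenced in the decorator is your own code; a hedged sketch of one that reads the role off AuthorizationContext.principals[0] (put there by the custom JWTService above) might look like this:

import {
  AuthorizationContext,
  AuthorizationDecision,
  AuthorizationMetadata,
} from '@loopback/authorization';

export async function basicAuthorization(
  authorizationCtx: AuthorizationContext,
  metadata: AuthorizationMetadata,
): Promise<AuthorizationDecision> {
  const principal = authorizationCtx.principals[0];
  // the 'role' property was added during token generation by JWTService
  const role = principal ? (principal as {role?: string}).role : undefined;
  if (!metadata.allowedRoles || metadata.allowedRoles.length === 0) {
    return AuthorizationDecision.ALLOW;
  }
  return role && metadata.allowedRoles.includes(role)
    ? AuthorizationDecision.ALLOW
    : AuthorizationDecision.DENY;
}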
I encountered a similar issue recently. A small difference is that instead of xADDomain, I'm using ActiveDirectoryDsc. The error was gone when I upgraded the OS to Windows Server 2019-Datacenter. A potential root cause might be the different PowerShell versions shipped with 2016 and 2019. Here's my log.

Windows Server 2016

VERBOSE: [2020-06-01 03:47:34Z] Settings handler status to 'transitioning' (C:\Packages\Plugins\Microsoft.Powershell.DSC\2.80.0.0\Status\0.status)
VERBOSE: [2020-06-01 03:47:34Z] Retrieving system information ...
VERBOSE: [2020-06-01 03:47:40Z] OS Version : 10.0
VERBOSE: [2020-06-01 03:47:40Z] Server OS : True
VERBOSE: [2020-06-01 03:47:40Z] 64-bit OS : True
VERBOSE: [2020-06-01 03:47:40Z] PS Version : 5.1.14393.3471
VERBOSE: [2020-06-01 03:47:40Z] Validating user provided settings for the DSC Extension Handler ...

And after reboot:

VERBOSE: [2020-06-01 03:53:05Z] Settings handler status to 'transitioning' (C:\Packages\Plugins\Microsoft.Powershell.DSC\2.80.0.0\Status\0.status)
VERBOSE: [2020-06-01 03:53:05Z] Will continue the existing configuration. Executing Start-DscConfiguration with -UseExisting option ...
VERBOSE: [2020-06-01 03:53:05Z] Settings handler status to 'transitioning' (C:\Packages\Plugins\Microsoft.Powershell.DSC\2.80.0.0\Status\0.status)
VERBOSE: [2020-06-01 03:53:07Z] [VERBOSE] Perform operation 'Invoke CimMethod' with following parameters, ''methodName' = ApplyConfiguration,'className' = MSFT_DSCLocalConfigurationManager,'namespaceName' = root/Microsoft/Windows/DesiredStateConfiguration'.
VERBOSE: [2020-06-01 03:53:07Z] [ERROR] WinRM cannot process the request. The following error with errorcode 0x80090350 occurred while using Negotiate authentication: An unknown security error occurred.
Possible causes are:
  -The user name or password specified are invalid.
  -Kerberos is used when no authentication method and no user name are specified.
  -Kerberos accepts domain user names, but not local user names.
  -The Service Principal Name (SPN) for the remote computer name and port does not exist.
  -The client and remote computers are in different domains and there is no trust between the two domains.
After checking for the above issues, try the following:
  -Check the Event Viewer for events related to authentication.
  -Change the authentication method; add the destination computer to the WinRM TrustedHosts configuration setting or use HTTPS transport. Note that computers in the TrustedHosts list might not be authenticated.
  -For more information about WinRM configuration, run the following command: winrm help config.
VERBOSE: [2020-06-01 03:53:07Z] [VERBOSE] Operation 'Invoke CimMethod' complete.
VERBOSE: [2020-06-01 03:53:07Z] [VERBOSE] Time taken for configuration job to complete is 0.039 seconds

Windows Server 2019

VERBOSE: [2020-06-01 08:33:17Z] Settings handler status to 'transitioning' (C:\Packages\Plugins\Microsoft.Powershell.DSC\2.80.0.0\Status\0.status)
VERBOSE: [2020-06-01 08:33:18Z] Retrieving system information ...
VERBOSE: [2020-06-01 08:33:22Z] OS Version : 10.0
VERBOSE: [2020-06-01 08:33:22Z] Server OS : True
VERBOSE: [2020-06-01 08:33:22Z] 64-bit OS : True
VERBOSE: [2020-06-01 08:33:22Z] PS Version : 5.1.17763.1007
VERBOSE: [2020-06-01 08:33:22Z] Validating user provided settings for the DSC Extension Handler ...

And after reboot:

VERBOSE: [2020-06-01 08:38:49Z] Settings handler status to 'transitioning' (C:\Packages\Plugins\Microsoft.Powershell.DSC\2.80.0.0\Status\0.status)
VERBOSE: [2020-06-01 08:38:49Z] Will continue the existing configuration. Executing Start-DscConfiguration with -UseExisting option ...
VERBOSE: [2020-06-01 08:38:50Z] Settings handler status to 'transitioning' (C:\Packages\Plugins\Microsoft.Powershell.DSC\2.80.0.0\Status\0.status)
VERBOSE: [2020-06-01 08:38:51Z] [VERBOSE] Perform operation 'Invoke CimMethod' with following parameters, ''methodName' = ApplyConfiguration,'className' = MSFT_DSCLocalConfigurationManager,'namespaceName' = root/Microsoft/Windows/DesiredStateConfiguration'.
VERBOSE: [2020-06-01 08:38:51Z] [VERBOSE] An LCM method call arrived from computer adPDC with user sid S-1-5-18.
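If you want to compare the exact builds yourself, the version shown in those logs can be read straight from PowerShell on each VM:

# prints 5.1.14393.x on Server 2016 and 5.1.17763.x on Server 2019
$PSVersionTable.PSVersion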
The NuGet package won't work out of the box with AppSync subscriptions, so you will need to write your own client code for that, like you attempted in the second (non-NuGet) example. Now, for the second example, take a second look at the python example referenced in your question. There are several steps that are not included in your code. I will enumerate the required steps and try to port them to C# from the python code (note that I don't have a C# environment at hand so there might be syntax errors, but this code should be pretty close to what you need).

Step 0 - AppSync endpoints. Assume the result of invoking aws appsync get-graphql-api --api-id example123456 for your API is:

{
    "graphqlApi": {
        "name": "myNewRealTimeGraphQL-API",
        "authenticationType": "<API_KEY>",
        "tags": {},
        "apiId": "example123456",
        "uris": {
            "GRAPHQL": "https://abc.appsync-api.us-west-2.amazonaws.com/graphql",
            "REALTIME": "wss://abc.appsync-realtime-api.us-west-2.amazonaws.com/graphql"
        },
        "arn": "arn:aws:appsync:us-west-2: xxxxxxxxxxxx:apis/xxxxxxxxxxxx"
    }
}

Step 1 - Build the connection URL
Step 2 - Connect to the WebSocket endpoint. This includes sending a connection_init message as per the protocol mentioned in the python article.
Step 3 - Wait for connection_ack, again as per the protocol.
Step 4 - Register the subscription
Step 5 - Send a mutation. This step is not in this response, but can be done through the AWS console.
Step 6 - Wait for "data" messages. These are the real-time events sent by AppSync.
Step 7 - Deregister the subscription
Step 8 - Disconnect

// These are declared at the same level as your _client

// This comes from graphqlApi.uris.GRAPHQL in step 0, set as a var here for clarity
_gqlHost = "abc.appsync-api.us-west-2.amazonaws.com";
// This comes from graphqlApi.uris.REALTIME in step 0, set as a var here for clarity
_realtimeUri = "wss://abc.appsync-realtime-api.us-west-2.amazonaws.com/graphql";
_apiKey = "<API KEY>";

static public async Task CallWebsocket()
{
    // Step 1
    // This is JSON needed by the server, it will be converted to base64
    // (note: might be better to use something like Json.NET for this task)
    var header = $@"{{
        ""host"":""{_gqlHost}"",
        ""x-api-key"": ""{_apiKey}""
    }}";

    // Now we need to encode the previous JSON to base64
    var headerB64 = System.Convert.ToBase64String(
        System.Text.Encoding.UTF8.GetBytes(header));

    UriBuilder connectionUriBuilder = new UriBuilder(_realtimeUri);
    connectionUriBuilder.Query = $"header={headerB64}&payload=e30=";

    try
    {
        _client = new ClientWebSocket();
        _client.Options.AddSubProtocol("graphql-ws");

        // Step 2
        await _client.ConnectAsync(connectionUriBuilder.Uri, CancellationToken.None);

        // Step 3
        await SendConnectionInit();

        await Receive();
    }
    catch (Exception ex)
    {
    }
}

static private async Task SendConnectionInit()
{
    ArraySegment<byte> outputBuffer = new ArraySegment<byte>(
        Encoding.UTF8.GetBytes(@"{""type"": ""connection_init""}"));
    await _client.SendAsync(outputBuffer, WebSocketMessageType.Text, true, CancellationToken.None);
}

static private async Task SendSubscription()
{
    // This detail is important: the subscription is a stringified JSON that will be embedded in the "data" field below
    var subscription = $@"{{\""query\"": \""subscription SubscribeToEventComments{{ subscribeToEventComments{{ content }} }}\"", \""variables\"": {{}} }}";

    var register = $@"{{
        ""id"": ""<SUB_ID>"",
        ""payload"": {{
            ""data"": ""{subscription}"",
            ""extensions"": {{
                ""authorization"": {{
                    ""host"": ""{_gqlHost}"",
                    ""x-api-key"":""{_apiKey}""
                }}
            }}
        }},
        ""type"": ""start""
    }}";

    // The output should look like below; note again the "data" field contains a stringified JSON that represents the subscription
    /*
    {
        "id": "<SUB_ID>",
        "payload": {
            "data": "{\"query\": \"subscription SubscribeToEventComments{ subscribeToEventComments{ content}}\", \"variables\": {} }",
            "extensions": {
                "authorization": {
                    "host": "abc.appsync-api.us-west-2.amazonaws.com",
                    "x-api-key":"<API KEY>"
                }
            }
        },
        "type": "start"
    }
    */

    ArraySegment<byte> outputBuffer = new ArraySegment<byte>(Encoding.UTF8.GetBytes(register));
    await _client.SendAsync(outputBuffer, WebSocketMessageType.Text, true, CancellationToken.None);
}

static private async Task Deregister()
{
    var deregister = $@"{{
        ""type"": ""stop"",
        ""id"": ""<SUB_ID>""
    }}";
    ArraySegment<byte> outputBuffer = new ArraySegment<byte>(Encoding.UTF8.GetBytes(deregister));
    await _client.SendAsync(outputBuffer, WebSocketMessageType.Text, true, CancellationToken.None);
}

static private async Task Receive()
{
    while (_client.State == WebSocketState.Open)
    {
        ArraySegment<Byte> buffer = new ArraySegment<byte>(new Byte[8192]);
        WebSocketReceiveResult result = null;
        using (var ms = new MemoryStream())
        {
            // This loop is needed because the server might send chunks of data that need to be assembled by the client
            // see: https://stackoverflow.com/questions/23773407/a-websockets-receiveasync-method-does-not-await-the-entire-message
            do
            {
                result = await _client.ReceiveAsync(buffer, CancellationToken.None);
                ms.Write(buffer.Array, buffer.Offset, result.Count);
            } while (!result.EndOfMessage);

            ms.Seek(0, SeekOrigin.Begin);

            using (var reader = new StreamReader(ms, Encoding.UTF8))
            {
                // convert stream to string
                var message = reader.ReadToEnd();
                Console.WriteLine(message);

                // quick and dirty way to check the response
                if (message.Contains("connection_ack"))
                {
                    // Step 4
                    await SendSubscription();
                }
                else if (message.Contains("data")) // Step 6
                {
                    // Step 7
                    await Deregister();
                    // Step 8
                    await _client.CloseOutputAsync(WebSocketCloseStatus.NormalClosure, string.Empty, CancellationToken.None);
                }
            }
        }
    }
}
To trigger a PermissionEvaluator, you have to use hasPermission() in @PreAuthorize. There are 2 versions of hasPermission(), which are:

(1) @PreAuthorize("hasPermission('foo', 'bar')") which will call

boolean hasPermission(Authentication authentication, Object targetDomainObject, Object permission);
/** targetDomainObject = 'foo', permission = 'bar' **/

(2) @PreAuthorize("hasPermission('foo', 'bar', 'baz')") which will call

boolean hasPermission(Authentication authentication, Serializable targetId, String targetType, Object permission);
/** targetId = 'foo', targetType = 'bar', permission = 'baz' **/

In both cases, the Authentication parameter is the Authentication token obtained from the SecurityContext. One thing to note is that when configuring @PreAuthorize("hasPermission()"), you can use #foo, @P or @Param from Spring Data to specify which argument of the protected method will be used to invoke the PermissionEvaluator. See this for more details. In your case, you could do something like:

@PreAuthorize("hasPermission(#id, 'getOrder')")
public EntityModel<Order> getOrders(@PathVariable Long id) {

}

and the PermissionEvaluator looks like:

public class MyPermissionEvaluator implements PermissionEvaluator {

    @Override
    public boolean hasPermission(Authentication auth, Object targetDomainObject, Object permission) {
        MyAuthentication myAuth = (MyAuthentication) auth;
        Long targetId = (Long) targetDomainObject;
        String permissionStr = (String) permission;

        if (permissionStr.equals("getOrder")) {
            return myAuth.getUserId().equals(targetId);
        } else if (permissionStr.equals("xxxx")) {
            // other permission checking
        }
        return false;
    }

    @Override
    public boolean hasPermission(Authentication auth, Serializable targetId, String targetType, Object permission) {
        // the 4-argument overload must also be implemented; not used here
        return false;
    }
}

Please note that it assumes you also customize the Authentication token to be MyAuthentication, which includes the userId. That also answers your 2nd concern: you can customize the authentication process to return a customized Authentication token into which you set the userId just after loading the user record for authentication. This way, the userId is stored inside MyAuthentication and you do not need to query it again in the PermissionEvaluator.

Alternatively, you can also consider directly expressing the authorization logic in @PreAuthorize without using hasPermission() for such a simple case:

@PreAuthorize("#id == authentication.userId")
public EntityModel<Order> getOrders(@PathVariable Long id) {

}
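For completeness: the evaluator only takes effect once it is registered with the method-security expression handler. A minimal sketch of that wiring, assuming the pre-5.6 Java config style used elsewhere in this answer:

@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class MethodSecurityConfig extends GlobalMethodSecurityConfiguration {

    @Override
    protected MethodSecurityExpressionHandler createExpressionHandler() {
        DefaultMethodSecurityExpressionHandler handler = new DefaultMethodSecurityExpressionHandler();
        // plug in the custom evaluator so hasPermission(...) resolves to it
        handler.setPermissionEvaluator(new MyPermissionEvaluator());
        return handler;
    }
}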
When Gradle encounters two different versions of the same dependency, it performs conflict resolution. It defaults to choosing the highest version number. However, because many libraries like Jackson consist of a number of individual modules like jackson-databind and jackson-core, you may end up in a situation where there is a mismatch between the different versions. To align them, you can use the Jackson BOM and Gradle's platform dependency mechanism. It looks like this (choose only one of the dependencies below):

dependencies {
    // Enforce the specified version
    implementation(enforcedPlatform("com.fasterxml.jackson:jackson-bom:2.10.4"))

    // Align all modules to the same version, but allow upgrade to a higher version
    implementation(platform("com.fasterxml.jackson:jackson-bom:2.10.4"))
}

You don't need to exclude anything from your other dependencies. If you encounter problems with the use of Jackson after upgrading, you should have a look at the release notes for 2.10 and check if you might be hit by any of the compatibility changes. Of course, if the problem is in a third-party library, it might be more difficult to fix. But you may try the latest version in the 2.9 line (which is 2.9.10 at this time) and see if the vulnerability is fixed there.
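To verify which Jackson versions actually ended up on a configuration after adding the BOM, Gradle's built-in dependencyInsight task is handy (the module and configuration names here are just examples):

./gradlew dependencyInsight --dependency jackson-databind --configuration runtimeClasspath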
After a day's research and trial, the best I could come up with was to keep files separate in the repo, but then combine multiple files together in the CI/CD pipeline before running it against the DB. I created a template to combine matching files into a single file in the staging directory, publish it for debugging the pipeline, then execute it against the SQL server. The template is:

# Template for executing all SQL files matching a string search
parameters:
- name: path      #$path = "$(System.DefaultWorkingDirectory)\Functions"
  type: string
- name: match     #$match = "BASE_*.sql"
  type: string
- name: outPath   #$outPath = "$(System.DefaultWorkingDirectory)\Functions"
  type: string
- name: outName   #$outName = "BASE.sql"
  type: string

steps:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      echo Source Files:
      Get-ChildItem ${{parameters.path}} -include ${{parameters.match}} -rec
  displayName: 'Files to process: ${{parameters.match}}'

- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      echo Creating: ${{parameters.outPath}}\${{parameters.outName}}
      Get-ChildItem ${{parameters.path}} -include ${{parameters.match}} -rec | ForEach-Object {gc $_; ""} | out-file ${{parameters.outPath}}\${{parameters.outName}}
  displayName: 'Combine: ${{parameters.outName}}'

- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '${{parameters.outPath}}\${{parameters.outName}}'
    artifact: '${{parameters.outName}}'
    publishLocation: 'pipeline'
  displayName: 'Publish: ${{parameters.outName}}'

- task: SqlDacpacDeploymentOnMachineGroup@0
  inputs:
    TaskType: 'sqlQuery'
    SqlFile: '${{parameters.outPath}}\${{parameters.outName}}'
    ServerName: '$(SQL_ServerName).database.windows.net'
    DatabaseName: '$(SQL_DatabaseName)'
    AuthScheme: 'sqlServerAuthentication'
    SqlUsername: '$(SQL_UserName)'
    SqlPassword: '$(SQL_Password)'
  displayName: 'Create or Alter: ${{parameters.outName}}'

- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: Remove-Item ${{parameters.path}}\${{parameters.match}} -Recurse
  displayName: 'Delete Files: ${{parameters.match}}'

The main pipeline then calls the template with the different search strings:

trigger:
- master

pool:
  vmImage: 'windows-latest'

steps:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: MKDIR "$(System.DefaultWorkingDirectory)\\Combined\\Functions"
  displayName: 'Create Output Folder'

- template: azTemplate/CombineAndRunSQLFiles.yml   # Functions: UTIL
  parameters:
    path: "$(System.DefaultWorkingDirectory)\\Functions"
    match: "UTIL_*.sql"
    outPath: "$(System.DefaultWorkingDirectory)\\Combined\\Functions"
    outName: "UTIL.sql"

- template: azTemplate/CombineAndRunSQLFiles.yml   # Functions: BASE
  parameters:
    path: "$(System.DefaultWorkingDirectory)\\Functions"
    match: "BASE_*.sql"
    outPath: "$(System.DefaultWorkingDirectory)\\Combined\\Functions"
    outName: "BASE.sql"

Result: the job ran on the hosted windows-latest agent (Azure Pipelines pool) in about a minute and produced 5 artifacts.
If you want to implement custom behavior when the authentication process (with the remember-me feature) succeeds, you can try:

CustomRememberMeAuthenticationFilter

Define a new filter such as:

public class CustomRememberMeAuthenticationFilter extends RememberMeAuthenticationFilter {

    @Override
    protected void onSuccessfulAuthentication(final HttpServletRequest request,
            final HttpServletResponse response, final Authentication authResult) {
        super.onSuccessfulAuthentication(request, response, authResult);
        if (authResult != null) {
            // process post authentication logic here..
        }
    }
}

Set the custom filter in the security chain:

@Override
protected void configure(HttpSecurity http) throws Exception {
    http
        .csrf().disable()
        .authorizeRequests()
        .antMatchers("/", "/login*").permitAll()
        //...

    http
        .addFilter(rememberMeAuthenticationFilter())
        //...
}

@Bean
protected RememberMeAuthenticationFilter rememberMeAuthenticationFilter() {
    return new CustomRememberMeAuthenticationFilter(authenticationManager(), rememberMeServices());
}

Check this in order to create your authenticationManager() and rememberMeServices(). In the previous snippet the custom filter is just added. If it does not work, you must research and find the exact position in the chain at which to insert your custom filter: addFilterBefore, addFilterAfter, addFilterAt. Check these add-filter methods.

Finally, remove the default http.rememberMe() in order to use your own filter, because the remember-me namespace element already inserts a RememberMeAuthenticationFilter, so it will still take precedence over yours, since it comes before it in the filter chain.

References

https://github.com/DGYao/spring-boot-demo/blob/master/src/main/java/com/springboot/web/WebSecurityConfigurer.java
https://craftingjava.com/blog/user-management-remember-me-jwt-token/
How can I use a custom configured RememberMeAuthenticationFilter in spring security?
https://www.baeldung.com/spring-security-remember-me
https://www.baeldung.com/spring-security-custom-filter#1-java-configuration
https://stackoverflow.com/a/22668530/3957754
https://docs.spring.io/spring-security/site/docs/3.1.x/reference/springsecurity-single.html#remember-me-impls
https://docs.spring.io/spring-security/site/migrate/current/3-to-4/html5/migrate-3-to-4-jc.html
persisted remember-me authentication after using custom filter
https://www.codejava.net/coding/how-to-implement-remember-password-remember-me-for-java-web-application
Spring Security custom RememberMeAuthenticationFilter not getting fired
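A minimal sketch of the rememberMeServices() bean referenced above, assuming the simple cookie-only TokenBasedRememberMeServices (the key string is an arbitrary example and must match the one used by your RememberMeAuthenticationProvider):

@Bean
public RememberMeServices rememberMeServices() {
    // key + UserDetailsService is the minimal constructor
    return new TokenBasedRememberMeServices("uniqueAndSecretKey", userDetailsService());
}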
Simple answers are Yes (app has encryption) and Yes (app uses exempt encryption). In my application, I am just opening my company's website in a WKWebView, but as it uses "https", it is considered exempt encryption. Apple document for more info: https://developer.apple.com/documentation/security/complying_with_encryption_export_regulations?language=objc

Alternatively, you can just add the key "ITSAppUsesNonExemptEncryption" with the value "NO" in your app's Info.plist file, and this way iTunes Connect won't ask you those questions anymore. More info: https://developer.apple.com/documentation/bundleresources/information_property_list/itsappusesnonexemptencryption?language=objc

You can follow these 3 simple steps to verify whether your application is exempt or not: https://help.apple.com/app-store-connect/#/dev63c95e436

You may need to submit this annual self-classification report to the US government. For more info: https://www.bis.doc.gov/index.php/policy-guidance/encryption/4-reports-and-reviews/a-annual-self-classification
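For reference, the raw Info.plist entry looks like this (added at the top level of the plist dict):

<key>ITSAppUsesNonExemptEncryption</key>
<false/>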
This method:

public void AuthenticateUser(AuthorizedModel model)
{
    var identity = new ClaimsIdentity(new []
    {
        // Some of my claims ...
    });
    var user = new ClaimsPrincipal(identity);
    NotifyAuthenticationStateChanged(Task.FromResult(new AuthenticationState(user)));
}

should be:

public void AuthenticateUser()
{
    // If AuthorizedModel model contains a JWT token or whatever you save in
    // local storage, then add it back as a parameter to AuthenticateUser and
    // place here the logic to save it in local storage.
    // After that, call the NotifyAuthenticationStateChanged method like this:
    NotifyAuthenticationStateChanged(GetAuthenticationStateAsync());
}

Note: The call to the StateHasChanged method has got nothing to do with the current issue. The call to the base class's NotifyAuthenticationStateChanged is done so that the base class, AuthenticationStateProvider, invokes the AuthenticationStateChanged event, passing the AuthenticationState object to subscribers (in this case, the CascadingAuthenticationState component), telling it to refresh its data (AuthenticationState).

Note: If the issue still persists in spite of the above changes, ensure that you add the following to the DI container:

services.AddScoped<CustomAuthenticationStateProvider>();
services.AddScoped<AuthenticationStateProvider>(provider =>
    provider.GetRequiredService<CustomAuthenticationStateProvider>());

Hope this helps...
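For context, a hedged sketch of the provider class these methods live in; where the claims come from (e.g. a token kept in local storage) is an assumption on my part:

using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Components.Authorization;

public class CustomAuthenticationStateProvider : AuthenticationStateProvider
{
    public override Task<AuthenticationState> GetAuthenticationStateAsync()
    {
        // Anonymous identity by default; build the ClaimsIdentity with an
        // authenticationType from your stored token so IsAuthenticated
        // becomes true once the user has logged in.
        var identity = new ClaimsIdentity();
        var user = new ClaimsPrincipal(identity);
        return Task.FromResult(new AuthenticationState(user));
    }
}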
I had the same problem and solved it by adding my own policy scheme that decides which type of authentication should be used based on the request path. All my Razor pages have a path starting with "/Identity" or "/Server", and all other requests should use JWT. I set this up in ConfigureServices using the following code:

// Add authentication using JWT and add a policy scheme to decide which type of authentication should be used
services.AddAuthentication()
    .AddIdentityServerJwt()
    .AddPolicyScheme("ApplicationDefinedAuthentication", null, options =>
    {
        options.ForwardDefaultSelector = (context) =>
        {
            if (context.Request.Path.StartsWithSegments(new PathString("/Identity"), StringComparison.OrdinalIgnoreCase)
                || context.Request.Path.StartsWithSegments(new PathString("/Server"), StringComparison.OrdinalIgnoreCase))
                return IdentityConstants.ApplicationScheme;
            else
                return IdentityServerJwtConstants.IdentityServerJwtBearerScheme;
        };
    });

// Use own policy scheme instead of the default policy scheme that was set in method AddIdentityServerJwt
services.Configure<AuthenticationOptions>(options =>
    options.DefaultScheme = "ApplicationDefinedAuthentication");
Spring Security provides protection against various common attacks by default, to make sure the application is secured. Since you asked for some basic and solid security measures, below are a few of my thoughts which can improve things a bit.

As you said, you have disabled the CSRF token, which is not good when you think your application should be highly secured. Usually people disable it (in demo code) because otherwise they won't be able to call the /logout URL with the GET method, as it requires submitting via POST with a _csrf token. Good that you have taken care of it in production.

Session fixation attack: This is the type of attack where someone can steal your current session by offering you their URL of the same website with a JSESSIONID appended, using the URL-rewrite approach. The Spring Security framework takes care of this by default and migrates the session once the user logs in. The corresponding configuration would be:

http.sessionManagement()
    .sessionFixation().migrateSession()

Securing the session cookie: A malicious script can read your cookie information on the browser end, so you need to make sure that your cookie is secured and accessible only by server-side code, by making it HttpOnly. For that, you can use the below config in your application.properties:

server.servlet.session.cookie.http-only=true

Running your app on HTTPS: Make sure that you use HTTPS in production; in that case you can also force your cookies to travel over the HTTPS protocol only by adding the below config to your application.properties:

server.servlet.session.cookie.secure=true

and to force an HTTPS connection add the line below to the configure() method (this won't be enough on its own, though, because you also have to set up your public/private key using keytool):

http.requiresChannel().requiresSecure();

Applying CSP: Use a Content Security Policy to avoid any XSS attacks. Spring Security provides various security headers by default, but it does not add Content-Security-Policy headers. You can add them in your security config file like below:

@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.headers().contentSecurityPolicy("script-src 'self' https://myclientscriptlocation.example.com; object-src https://myclientsideplugins.example.com; report-uri /cspreport-endpoint/");
    }
}

Password hashing: You are not using this in your security config. You have to keep passwords hashed while storing them in the database (a minimal sketch follows below these points).

Securing your application.properties: Security should be applied not only against outsiders, it should also protect against insiders, e.g. encryption and decryption of database passwords or any other config passwords. Follow here on how to secure your application properties.

GET endpoints: "Note that /users/** contains some GET endpoints containing User Information, can I apply limitations to who visits them?" Yes, you can, but that depends on your requirements. One example that I can think of is IP address filtering: e.g. if you only want users located in the US to have access, or if you know the IP range of your users, etc.

.antMatchers("/foos/**").hasIpAddress("xx.xxx.xxx.xx")

POST endpoints: "I've also found some ways to secure POST by using a JSON Web Token, is it a best practice?" JWT is mostly used in RESTful web services. If your application exposes REST endpoints and requires authenticated access, then JWT is the best option.
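Picking up the password-hashing point from above: a minimal sketch using Spring Security's BCryptPasswordEncoder (the registration snippet is illustrative; userRepository and the user entity are hypothetical names):

@Bean
public PasswordEncoder passwordEncoder() {
    return new BCryptPasswordEncoder();
}

// when registering a user, store only the hash:
user.setPassword(passwordEncoder.encode(rawPassword));
userRepository.save(user);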
Spring also provides OAuth 2.0, RSA, and LDAP dependencies to enhance security. These are different ways of doing authentication and authorization; some of them have multiple flows, but the same security factors apply to them when they are accessed by outside users. Whether you need them totally depends on your project's requirements. For example, if you are developing an application for internal organization use, where every user/employee is already set up at the organization level and you want everyone to access the application, then LDAP integration is better. OAuth 2.0 is better when you have multiple microservices, or you want a social-login implementation like Login with Google or Login with Facebook.

"Does these prevent DDOS attacks as well as brute force attacks?" No. Those should be taken care of by tuning various security parameters, like limiting the session time, checking security headers, handling memory leaks, applying a timeout for POST requests so that no one can post a huge request payload, etc. You have to do a bit of legwork to mitigate such attacks.

PS: Remove permitAll() from the security configuration:

.defaultSuccessUrl("/dashboard", true)
.permitAll()
I know this is an old question, but I found myself here with the same problem, and information about this is surprisingly thin on the ground, likely because Microsoft recommends using (2FA) authenticator apps based on a Time-based One-time Password algorithm (TOTP) rather than an OTP sent over SMS/email.

Not the intended purpose, but nevertheless the following will allow you to generate and save a time-limited (3 minutes) 6-digit OTP, associate it with a user, and then use it to verify them using ASP.NET Core Identity.

GenerateChangePhoneNumberTokenAsync

var code = await _userManager.GenerateChangePhoneNumberTokenAsync(user, model.PhoneNumber);

https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.identity.usermanager-1.generatechangephonenumbertokenasync

and VerifyChangePhoneNumberTokenAsync

bool valid = await _userManager.VerifyChangePhoneNumberTokenAsync(user, code, model.PhoneNumber);

https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.identity.usermanager-1.verifychangephonenumbertokenasync

This can be seen implemented in the documentation posted by Erik & paulsm4:

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/2fa?view=aspnetcore-1.1&viewFallbackFrom=aspnetcore-3.1

A link to the code:
https://github.com/dotnet/AspNetCore.Docs/tree/master/aspnetcore/security/authentication/2fa/sample/Web2FA

A link to the controller where this is implemented:
https://github.com/dotnet/AspNetCore.Docs/blob/master/aspnetcore/security/authentication/2fa/sample/Web2FA/Controllers/ManageController.cs
There is a widespread issue at AWS affecting IAM.

UPDATE: [RESOLVED] Increased API Error Rates

"Between June 11 9:56 PM PDT and June 12 6:40 AM PDT, AWS IAM experienced increased error rates and latencies on the AWS IAM CreateRole and CreateServiceLinkedRoles APIs. The issue has been resolved and the service is operating normally."

From https://status.aws.amazon.com/:

Increased IAM API Error Rates

We have identified the root cause of the increased error rates and latencies on the AWS IAM CreateRole and CreateServiceLinkedRole APIs and are working towards resolution. Other AWS services such as AWS CloudFormation whose features require these actions may also be impacted. User authentications and authorizations are not impacted.

11:30 PM PDT: We are investigating increased error rates and latencies on AWS IAM administrative APIs with potential impact in multiple regions. IAM role creation is impacted. Other AWS services whose features require these actions may also be impacted. User authentications and authorizations are not impacted.

Jun 12, 12:03 AM PDT: We continue to investigate increased error rates and latencies on AWS IAM administrative APIs with potential impact in multiple regions. IAM role creation is impacted. Other AWS services like AWS CloudFormation whose features require these actions may also be impacted. User authentications and authorizations are not impacted.

Jun 12, 2:12 AM PDT: We have identified the root cause of the increased error rates and latencies on the AWS IAM CreateRole and CreateServiceLinkedRole APIs and are working towards resolution. Other AWS services such as AWS CloudFormation whose features require these actions may also be impacted. User authentications and authorizations are not impacted.

Jun 12, 3:30 AM PDT: We wanted to provide you with more details on the issue causing increased error rates and latencies on the AWS IAM CreateRole and CreateServiceLinkedRole APIs. While we have identified the root cause and are working towards resolution, with an issue like this, it is always difficult to provide an accurate ETA, but we expect to restore access to the CreateRole and CreateServiceLinkedRole APIs within the next several hours. We are working through the recovery process now and will continue to keep you updated if this ETA changes. IAM user authentications and authorizations are not impacted. Other AWS services like AWS CloudFormation whose features require these actions may also be impacted.
This article helped me get things squared away on my Windows 10 box: https://richardballard.co.uk/ssh-keys-on-windows-10/

Note: The first section in the article should be titled "Enable the SSH Client in Windows" and should refer to getting the SSH client enabled, not the server.

If you can get ssh -T git@github.com working without it prompting you for your password, as described in the above article, then you'll be able to push from IntelliJ just fine. The keys were to:

get the OpenSSH Authentication Agent service running in Windows

make sure the ssh-add that is invoked is the one provided in C:\Windows\System32\OpenSSH

make sure git is also configured to use the ssh provided by Windows: git config --global core.sshCommand "'C:\Windows\System32\OpenSSH\ssh.exe'"

When generating your keys with ssh-keygen, consider using the ecdsa algorithm as described here: https://www.ssh.com/ssh/keygen/

Also important was cleaning up anything else that was trying to do ssh outside of Windows (like PuTTY). One problem I kept facing, an 'invalid format' error when trying to run ssh-add, I believe was caused by a different ssh-add program on my path being used rather than the one from the OpenSSH that ships with Windows.
Now, some weeks later, I've learned a lot. First, you need to differentiate between the UI and the logical script. Second, whether it is a container-bound or stand-alone script.

A container-bound script is bound to a Google Spreadsheet, Google Doc or any other UI that allows user interaction. In that case, you can access the UI in the code and add custom menus to the UI that will invoke methods in your script once the user clicks on them. The disadvantage is that you need to know whether it is a Spreadsheet or a Doc, since the UI class differs. You also need to instruct the user to enter his or her credentials using the custom menu. There is a very nice instruction online. The following code snippet is inspired by that instruction. Make sure to create a trigger for onOpen.

var ui = SpreadsheetApp.getUi();
var userProperties = PropertiesService.getUserProperties();
const API_KEY = 'api.key';

function onOpen() {
  ui.createMenu('Credentials & Authentication')
    .addItem('Set API key', 'setKey')
    .addItem('Delete API key', 'resetKey')
    .addItem('Delete all credentials', 'deleteAll')
    .addToUi();
}

function setKey() {
  var scriptValue = ui.prompt('Please provide your API key.', ui.ButtonSet.OK);
  userProperties.setProperty(API_KEY, scriptValue.getResponseText());
}

function resetKey() {
  userProperties.deleteProperty(API_KEY);
}

function deleteAll() {
  userProperties.deleteAllProperties();
}

For a stand-alone script you need to find some other way to connect to the UI. In my situation I was implementing a custom connector for Google Data Studio, for which there is a very nice example online as well. There is a quite detailed instruction on authentication and an API reference on authentication too. This custom connector for Kaggle was very helpful as well; it is open source on the Google Data Studio GitHub. The following demo code is inspired by those examples. Have a look at getCredentials, validateCredentials, getAuthType, resetAuth, isAuthValid and setCredentials.
var cc = DataStudioApp.createCommunityConnector();

const URL_DATA = 'https://www.myverysecretdomain.com/api';
const URL_PING = 'https://www.myverysecretdomain.com/ping';
const AUTH_USER = 'auth.user';
const AUTH_KEY = 'auth.key';
const JSON_TAG = 'user';

String.prototype.format = function() {
  // https://coderwall.com/p/flonoa/simple-string-format-in-javascript
  var a = this;
  for (var k in arguments) {
    a = a.replace("{" + k + "}", arguments[k]);
  }
  return a;
}

function httpGet(user, token, url, params) {
  try {
    // this depends on the URL you are connecting to
    var headers = {
      'ApiUser': user,
      'ApiToken': token,
      'User-Agent': 'my super freaky Google Data Studio connector'
    };

    var options = {
      headers: headers
    };

    if (params && Object.keys(params).length > 0) {
      var params_ = [];
      for (const [key, value] of Object.entries(params)) {
        var value_ = value;
        if (Array.isArray(value))
          value_ = value.join(',');
        params_.push('{0}={1}'.format(key, encodeURIComponent(value_)));
      }
      var query = params_.join('&');
      url = '{0}?{1}'.format(url, query);
    }

    var response = UrlFetchApp.fetch(url, options);

    return {
      code: response.getResponseCode(),
      json: JSON.parse(response.getContentText())
    };
  } catch (e) {
    throwConnectorError(e);
  }
}

function getCredentials() {
  var userProperties = PropertiesService.getUserProperties();
  return {
    username: userProperties.getProperty(AUTH_USER),
    token: userProperties.getProperty(AUTH_KEY)
  };
}

function validateCredentials(user, token) {
  if (!user || !token)
    return false;

  var response = httpGet(user, token, URL_PING);

  if (response.code == 200)
    console.log('API key for the user %s successfully validated', user);
  else
    console.error('API key for the user %s is invalid. Code: %s', user, response.code);

  return response;
}

function getAuthType() {
  var cc = DataStudioApp.createCommunityConnector();
  return cc.newAuthTypeResponse()
    .setAuthType(cc.AuthType.USER_TOKEN)
    .setHelpUrl('https://www.myverysecretdomain.com/index.html#authentication')
    .build();
}

function resetAuth() {
  var userProperties = PropertiesService.getUserProperties();
  userProperties.deleteProperty(AUTH_USER);
  userProperties.deleteProperty(AUTH_KEY);
  console.info('Credentials have been reset.');
}

function isAuthValid() {
  var credentials = getCredentials();
  if (credentials == null) {
    console.info('No credentials found.');
    return false;
  }
  var response = validateCredentials(credentials.username, credentials.token);
  return (response != null && response.code == 200);
}

function setCredentials(request) {
  var credentials = request.userToken;
  var response = validateCredentials(credentials.username, credentials.token);
  if (response == null || response.code != 200)
    return { errorCode: 'INVALID_CREDENTIALS' };

  var userProperties = PropertiesService.getUserProperties();
  userProperties.setProperty(AUTH_USER, credentials.username);
  userProperties.setProperty(AUTH_KEY, credentials.token);
  console.info('Credentials have been stored');

  return { errorCode: 'NONE' };
}

function throwConnectorError(text) {
  DataStudioApp.createCommunityConnector()
    .newUserError()
    .setDebugText(text)
    .setText(text)
    .throwException();
}

function getConfig(request) {
  // ToDo: handle request.languageCode for different languages being displayed
  console.log(request);
  var params = request.configParams;
  var config = cc.getConfig();
  // ToDo: add your config if necessary
  config.setDateRangeRequired(true);
  return config.build();
}

function getDimensions() {
  var types = cc.FieldType;
  return [
    { id: 'id', name: 'ID', type: types.NUMBER },
    { id: 'name', name: 'Name', isDefault: true, type: types.TEXT },
    { id: 'email', name: 'Email', type: types.TEXT }
  ];
}

function getMetrics() {
  return [];
}

function getFields(request) {
  Logger.log(request);
  var fields = cc.getFields();
  var dimensions = this.getDimensions();
  var metrics = this.getMetrics();
  dimensions.forEach(dimension => fields.newDimension().setId(dimension.id).setName(dimension.name).setType(dimension.type));
  metrics.forEach(metric => fields.newMetric().setId(metric.id).setName(metric.name).setType(metric.type).setAggregation(metric.aggregations));
  var defaultDimension = dimensions.find(field => field.hasOwnProperty('isDefault') && field.isDefault == true);
  var defaultMetric = metrics.find(field => field.hasOwnProperty('isDefault') && field.isDefault == true);
  if (defaultDimension)
    fields.setDefaultDimension(defaultDimension.id);
  if (defaultMetric)
    fields.setDefaultMetric(defaultMetric.id);
  return fields;
}

function getSchema(request) {
  var fields = getFields(request).build();
  return { schema: fields };
}

function convertValue(value, id) {
  // ToDo: add special conversion if necessary
  switch (id) {
    default:
      // value will be converted automatically
      return value[id];
  }
}

function entriesToDicts(schema, data, converter, tag) {
  return data.map(function(element) {
    var entry = element[tag];
    var row = {};
    schema.forEach(function(field) {
      // field has the same name in the connector and the original data source
      var id = field.id;
      var value = converter(entry, id);
      // use UI field ID
      row[field.id] = value;
    });
    return row;
  });
}

function dictsToRows(requestedFields, rows) {
  return rows.reduce((result, row) => ([...result, {'values': requestedFields.reduce((values, field) => ([...values, row[field]]), [])}]), []);
}

function getParams(request) {
  var schema = this.getSchema();
  var params;
  if (request) {
    params = {};
    // ToDo: handle pagination={startRow=1.0, rowCount=100.0}
  } else {
    // preview only
    params = {
      limit: 20
    };
  }
  return params;
}

function getData(request) {
  Logger.log(request);

  var credentials = getCredentials();
  var schema = getSchema();
  var params = getParams(request);

  var requestedFields;  // fields structured as I want them (see above)
  var requestedSchema;  // fields structured as Google expects them
  if (request) {
    // make sure the ordering of the requested fields is kept correct in the resulting data
    requestedFields = request.fields.filter(field => !field.forFilterOnly).map(field => field.name);
    requestedSchema = getFields(request).forIds(requestedFields);
  } else {
    // use all fields from schema
    requestedFields = schema.map(field => field.id);
    requestedSchema = getFields(request);
  }

  var filterPresent = request && request.dimensionsFilters;
  var filter = null;
  if (filterPresent) {
    // ToDo: apply request filters on API level (before the API call) to minimize
    // data retrieval from the API (number of rows) and increase speed
    // see https://developers.google.com/datastudio/connector/filters
    // filter = ... // initialize filter
    // filter.preFilter(params); // low-level API filtering if possible
  }

  // get HTTP response; e.g. check for HTTP RETURN CODE on response.code if necessary
  var response = httpGet(credentials.username, credentials.token, URL_DATA, params);

  // get JSON data from HTTP response
  var data = response.json;

  // convert the full dataset including all fields (the full schema);
  // non-requested fields will be filtered later on
  var rows = entriesToDicts(schema, data, convertValue, JSON_TAG);

  // match rows against filter (high-level filtering)
  //if (filter)
  //  rows = rows.filter(row => filter.match(row) == true);

  // remove non-requested fields
  var result = dictsToRows(requestedFields, rows);

  console.log('{0} rows received'.format(result.length));
  //console.log(result);

  return {
    schema: requestedSchema.build(),
    rows: result,
    filtersApplied: filter ? true : false
  };
}

If none of this fits your requirements, then go with a WebApp as suggested in the other answer by @kessy.
Therefore if Tokens are never used, Sanctum is basically the same as the default Authentication method, am I correct? Yes, under the hood it uses laravel's default auth. Taking a look at the sanctum guard (below code taken fro github. It was last commited on Apr 11, sanctum 2.x) <?php namespace Laravel\Sanctum; use Illuminate\Contracts\Auth\Factory as AuthFactory; use Illuminate\Http\Request; class Guard { /** * The authentication factory implementation. * * @var \Illuminate\Contracts\Auth\Factory */ protected $auth; /** * The number of minutes tokens should be allowed to remain valid. * * @var int */ protected $expiration; /** * Create a new guard instance. * * @param \Illuminate\Contracts\Auth\Factory $auth * @param int $expiration * @return void */ public function __construct(AuthFactory $auth, $expiration = null) { $this->auth = $auth; $this->expiration = $expiration; } /** * Retrieve the authenticated user for the incoming request. * * @param \Illuminate\Http\Request $request * @return mixed */ public function __invoke(Request $request) { if ($user = $this->auth->guard(config('sanctum.guard', 'web'))->user()) { return $this->supportsTokens($user) ? $user->withAccessToken(new TransientToken) : $user; } if ($token = $request->bearerToken()) { $model = Sanctum::$personalAccessTokenModel; $accessToken = $model::findToken($token); if (! $accessToken || ($this->expiration && $accessToken->created_at->lte(now()->subMinutes($this->expiration)))) { return; } return $this->supportsTokens($accessToken->tokenable) ? $accessToken->tokenable->withAccessToken( tap($accessToken->forceFill(['last_used_at' => now()]))->save() ) : null; } } /** * Determine if the tokenable model supports API tokens. * * @param mixed $tokenable * @return bool */ protected function supportsTokens($tokenable = null) { return $tokenable && in_array(HasApiTokens::class, class_uses_recursive( get_class($tokenable) )); } } If you check the _invoke() method, if ($user = $this->auth->guard(config('sanctum.guard', 'web'))->user()) { return $this->supportsTokens($user) ? $user->withAccessToken(new TransientToken) : $user; } the authenticated user is found using $user = $this->auth->guard(config('sanctum.guard', 'web'))->user() After checking the sanctum config file, there is no sanctum.guard config currently (it's probably meant for some future version), so sanctum checks with the web guard by default, so it's basically doing the same thing as your default web routes. But you've misunderstood the use of Sanctum. Sanctum is for API authentication and not for web auth (though it can be used web auth as well). Sanctum's non-token auth is for your SPA's to be able to use the same API as mobile applications ( which use token authentication ) without needing tokens and providing the benefits of csrf and session based auth. To help you understand better, suppose you have build an API which uses tokens (if it's already using sanctum for tokens, that makes things simpler) for authentication. Now you wish to build an SPA ( which may be build inside the laravel project itself, or a seperate project, on same domain or on different domain ) which will use the same API's, but since this will be built by you, it is a trusted site so you don't want it to use tokens but instead use laravel's default session based auth along with csrf protection while also using the same api routes. The SPA will communicate with the server through ajax. 
You also want to ensure that only your SPA is allowed to use session based auth and not allow other third party sites to use it. So this is where Sanctum comes in. You would just need to add the Sanctum middleware to your api route group in app/Http/Kernel.php

use Laravel\Sanctum\Http\Middleware\EnsureFrontendRequestsAreStateful;

'api' => [
    EnsureFrontendRequestsAreStateful::class,
    'throttle:60,1',
    \Illuminate\Routing\Middleware\SubstituteBindings::class,
],

Then configure sanctum to allow your SPA's domain and configure cors (check the docs to learn how to do this). Then just add the auth:sanctum middleware to your route and you're done with the server-side setup. Now these routes will authenticate users if the request has a token or if it is stateful (session cookie). Now your SPA can communicate with your API without tokens.

To get csrf protection, call the csrf-cookie request first; this will set up a csrf token in your cookies, and axios will automatically attach it to subsequent requests

axios.get('/sanctum/csrf-cookie').then(response => {
    // Login...
})

What is the difference between sanctum and passport since they do the same thing but Sanctum is said to be lightweight.

Well, it's just like it says: sanctum is lightweight. This is because Passport provides full OAuth functionality while Sanctum only focuses on creating and managing tokens.

To explain OAuth in a simple way: you must have seen those Sign in with Google, Sign in with Facebook, Sign in with Github buttons on different sites, which let you sign in to those sites using your google/facebook/github account. This is possible because Google, Facebook and Github provide OAuth functionality (just a simple example, not going into too much detail).

For most websites, you don't really need Passport as it provides a lot of features that you don't need. For simple API authentication Sanctum is more than enough.
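For the token side that Sanctum does focus on, here is a minimal sketch of issuing a personal access token, following the pattern from the Sanctum docs (the route and the device_name field are illustrative; it assumes a User model using the HasApiTokens trait):

use App\User;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Hash;
use Illuminate\Support\Facades\Route;

// routes/api.php — exchange credentials for a plain-text bearer token
Route::post('/token', function (Request $request) {
    $user = User::where('email', $request->email)->first();

    if (! $user || ! Hash::check($request->password, $user->password)) {
        return response()->json(['message' => 'Invalid credentials'], 401);
    }

    // createToken() comes from the HasApiTokens trait
    return ['token' => $user->createToken($request->device_name)->plainTextToken];
});

A mobile client would then send that token as an Authorization: Bearer header, which is exactly the branch of the guard's __invoke() method quoted above.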
First of all, the best practice is to have one key per user per machine. That's the most secure approach, because it means you can remove access from one machine independent of the others, such as if one machine is lost or stolen.

However, having said that, if you really want to do this and want to ignore best practices, you can copy the id_rsa and id_rsa.pub files to a different machine, and that should work. However, in this case, you generated the key on a newer machine which uses a different private key format or a more modern encryption algorithm for encrypting it than the older machine. The default encryption for older RSA keys, the PKCS #1 format, tends to leave a lot to be desired and isn't very secure.

The easiest, simplest way to solve this problem is to generate a new Ed25519 key pair, because those always use the OpenSSH format, and you can do that with ssh-keygen -t ed25519. If you want to then copy it, the files are ~/.ssh/id_ed25519 and ~/.ssh/id_ed25519.pub. This is also the most preferred key format these days, but if you're using something ancient like CentOS 6, then it may not be supported.

If you don't want to do that, then you can rewrite the existing private key into the appropriate format using ssh-keygen -p with the -m option (ssh-keygen -i and ssh-keygen -e are the related import/export options). This should be done on the newer machine, the one that generated the key. The manual page documents the options and formats supported. You can use file on that machine to find out the format that the private key is in.
today I'm using Django 2.2, and I'd like to add WebSocket support to my project.

If you want to add websocket support to your app, at the moment you don't need to upgrade to django 3.0. Django 2.2 plus channels can do that - and for the time being is the best way forward. (Although there's absolutely no harm in upgrading to django 3.0 if you don't have any good reason not to). I will try and further explain why in this answer.

From what I understood, Django-Channels is a project that have been started outside of Django, and then, started to be integrated in the core Django. But the current state of this work remains confusing to me.

Yes, my understanding is that channels started out as a project from one of the core Django developers (Andrew Godwin - who has also been instrumental in bringing about the async changes brought in Django 3.0). It is not included automatically if you just install Django, but it is officially part of the django project, and has been since September 2016 (see here). It's now on version 2.4 and so is an established and stable project that can be used to add websockets to your django app.

So what's going on with Django 3.x and async?

Whilst channels adds a way to add some async functionality to your django app, Django at its core is still synchronous. The 'async' project that is being gradually introduced addresses this. The key thing to note here is that it's being introduced gradually. Django is made up of several layers:

WSGI server (not actually part of django): deals with the protocol of actually accepting an HTTP request
Base Handler: This takes the request passed to it from the server and makes sure it's sent through the middleware, and the url config, so that we end up with a django request object, and a view to pass it to.
The view layer (which does whatever you tell it to)
The ORM, and all the other lovely stuff you get with Django, that we can call from the view.

Now to fully benefit from async, we really need all of these layers to be async, otherwise there won't really be any performance benefit. This is a fairly big project, hence it is being rolled out gradually:

With the release of django 3.0, all that was really added was the ability to talk to an ASGI server (rather than just a WSGI one).
When Django 3.1 is released (expected August 2020) it is anticipated that there will be capabilities for asynchronous middleware and views.
Then finally in django 3.2, or maybe even 4.0, we will get async capabilities up and down the whole of Django.

Once we get to that final point, it may be worth considering using the async features of Django for stuff like web-sockets, but at the moment we can't even take advantage of the fact we can now deal with ASGI as well as WSGI servers. You can use Django with an ASGI server, but there would be no point as the base handler is still synchronous.

TLDR

Django channels adds a way to deal with protocols other than HTTP, and adds integrations into things such as django's session framework and authentication framework, so it's easy to add things like websockets to your django project. It is complete and you can start working with it today! (A minimal consumer sketch follows below.)

Native async support is a fundamental re-write of the core of Django. This is a work in progress. It's very exciting, but won't be ready to really benefit from for a little while yet.

There was a good talk given at last year's DjangoCon outlining the plans for async django. You can view it here.
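To make the channels side concrete, here's a minimal sketch of a synchronous WebSocket consumer and its routing, using the channels 2.x generic consumer API (the EchoConsumer name and the URL path are illustrative):

# consumers.py
from channels.generic.websocket import WebsocketConsumer

class EchoConsumer(WebsocketConsumer):
    def connect(self):
        # accept the WebSocket handshake
        self.accept()

    def receive(self, text_data=None, bytes_data=None):
        # echo whatever the client sent straight back
        self.send(text_data=text_data)

# routing.py
from channels.routing import ProtocolTypeRouter, URLRouter
from django.urls import path

from .consumers import EchoConsumer

application = ProtocolTypeRouter({
    # plain HTTP keeps going through Django's normal views
    'websocket': URLRouter([
        path('ws/echo/', EchoConsumer),
    ]),
})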
The default api middleware stack is stateless. I don't know what your EnsureFrontendRequestsAreStateful::class does here. So there are a couple of solutions to fix this:

Add the same middleware as web

Add some extra middleware to your 'api' group so your cookies will identify your session (only the first one is mandatory), just as it works with the 'web' middleware:

StartSession::class,
AuthenticateSession::class,
ShareErrorsFromSession::class,

Note that this makes your API stateful (since it now receives a session), which might be undesirable.

Use the built-in token guard for authentication

This is defined as the default driver for the api auth guard (see config/auth.php) (https://github.com/laravel/laravel/blob/master/config/auth.php#L45)

It checks for the existence of (in this order):

An ?api_token=xxx query parameter
An api_token request body parameter
An Authorization: Bearer xxx header
A header named PHP_AUTH_PW

See Illuminate\Auth\TokenGuard::97 (https://github.com/laravel/framework/blob/7.x/src/Illuminate/Auth/TokenGuard.php#L97) to see how this happens.

The token guard relies on an api_token column in your users table, which is checked against one of the items in the list above. See https://laravel.com/docs/6.x/api-authentication#database-preparation for an example migration on how to add this column.

Create a custom Auth guard in a service provider boot() method

As described here: https://laravel.com/docs/7.x/authentication#closure-request-guards. For example, this one authenticates a user using an X-Api-Key header:

Auth::viaRequest('custom-token', function ($request) {
    return User::where('token', $request->header('X-Api-Key'))->first();
});

You can then assign the 'driver' => 'custom-token' in your config/auth.php file (a sketch follows below).

Note that this all is dependent on the Illuminate\Auth\AuthServiceProvider::class, which should always be defined in your config/app.php['providers'] list. This is the basic service provider that makes sure that the Auth::user() functions etc. are available in your code.

So if you want to require authentication for particular routes, you'd have to add the 'auth' middleware (which is a shorthand for \App\Http\Middleware\Authenticate::class as seen in app\Http\Kernel::$routeMiddleware) to either your 'api' middleware group for api-wide authentication, or to separate routes.
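For reference, the config/auth.php wiring for that closure guard could look like this (a sketch; the 'users' provider name is the Laravel default):

// config/auth.php
'guards' => [
    'web' => [
        'driver' => 'session',
        'provider' => 'users',
    ],

    // point the api guard at the closure-based driver registered via Auth::viaRequest()
    'api' => [
        'driver' => 'custom-token',
        'provider' => 'users',
    ],
],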
You can try using Spring Security and the built-in support for Keycloak. This guide (https://www.baeldung.com/spring-boot-keycloak) gives a pretty complete example. For completeness' sake, here are the relevant excerpts from the guide.

If you have not already, you will need to add Spring's Keycloak dependencies to your pom.xml

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.keycloak.bom</groupId>
            <artifactId>keycloak-adapter-bom</artifactId>
            <version>3.3.0.Final</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
...
<dependency>
    <groupId>org.keycloak</groupId>
    <artifactId>keycloak-spring-boot-starter</artifactId>
</dependency>

You can configure Spring through the application properties (a sketch follows after the code below). Then simply set up your Spring Security configuration class extending the Keycloak adapter.

@Configuration
@EnableWebSecurity
@ComponentScan(basePackageClasses = KeycloakSecurityComponents.class)
class SecurityConfig extends KeycloakWebSecurityConfigurerAdapter {

    @Autowired
    public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
        KeycloakAuthenticationProvider keycloakAuthenticationProvider = keycloakAuthenticationProvider();
        keycloakAuthenticationProvider.setGrantedAuthoritiesMapper(new SimpleAuthorityMapper());
        auth.authenticationProvider(keycloakAuthenticationProvider);
    }

    @Bean
    public KeycloakSpringBootConfigResolver keycloakConfigResolver() {
        return new KeycloakSpringBootConfigResolver();
    }

    @Bean
    @Override
    protected SessionAuthenticationStrategy sessionAuthenticationStrategy() {
        return new RegisterSessionAuthenticationStrategy(new SessionRegistryImpl());
    }
}
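The application properties referred to above would look roughly like this (the URL, realm and client values are assumptions for a local Keycloak install — adjust them to your setup):

# application.properties
keycloak.auth-server-url=http://localhost:8180/auth
keycloak.realm=SpringBootKeycloak
keycloak.resource=login-app
keycloak.public-client=true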
You'd use the URL without any embedded authentication information, then handle credential lookup as part of the git interaction. The downside of this is that each person needing access has to independently configure their system correctly before things will work. (But throw 2FA and SSO into the mix, and there's probably no way around individual configuration.) One approach is relying on git using SSH, and use SSH's support for configuration to supply credentials on your behalf. This is the approach recommended in another answer. But that's not the only way: You can make things work with an HTTPS URL. To do so, you need to use git's built-in support for fetching credentials. This is what each team member would use to have an HTTPS URL that requires auth work without repeatedly demanding credentials from them. Git supports handling authorization using a credential helper. The helper used for a repo is configured in your git config as the value of credential.helper. The specific helpers available, and how you install and configure your chosen one, depends on your platform. GitHub has per-platform docs for setting this up. The Git Book also has a Credential Storage chapter.
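As a concrete illustration, picking a helper usually comes down to one git config command. Helper availability varies by platform, so treat these as examples rather than a prescription:

# macOS: use the Keychain-backed helper shipped with git
git config --global credential.helper osxkeychain

# Windows: use Git Credential Manager
git config --global credential.helper manager

# Linux: cache credentials in memory (default timeout 15 minutes)
git config --global credential.helper cache

After that, the first HTTPS fetch prompts for credentials once and the helper serves them from then on.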
Your problem has nothing to do with your class being templated. You are trying to implement your operator<< as a non-static member of your class, but that won't work for the binary stream operator<<, it needs to be a non-member instead. A binary operator implemented as a class member can only be a member of the class in its left-hand argument, which in this case is std::ostream, and your class is not the std::ostream class. As such, the operator needs to be a free function (non-member) instead. If you mark the binary operator<< as a friend inside the class, not only will it have access to the private members of your class, but more importantly the compiler will treat the operator as a free function in the surrounding namespace that your class is declared in, which is what you need, eg: template <class T> class Set{ ... public: ... // inlined implementation friend ostream& operator<<(ostream& s, const Set& b){ ... return s; } }; Or: template <class T> class Set{ ... // non-inlined implementation friend ostream& operator<<(ostream& s, const Set& b); }; template <class T> ostream& operator<<(ostream& s, const Set<T>& b){ ... return s; } On a side note, your addElement() has a potential buffer overflow. It should look more like this instead: void addElement(const T& t){ for(int i = 0; i < index; ++i){ if (data[i] == t){ cout << "This element (" << t << ") cannot be added since it is already in the elements" << endl; return; } } if (index >= 100){ cout << "This element (" << t << ") cannot be added since the elements is full" << endl; return; } data[index++] = t; } Also, your class is missing: a destructor, to delete[] the elements array. a copy constructor and copy-assignment operator, to make deep copies of the elements data between Set objects. (C++11 and later) a move constructor and move-assignment operator, to move the array between objects. Per the Rule of 3/5/0.
How about this modification?

Modification points:

When I checked the official document for Search of the HubSpot API, I found the curl sample. When this sample is converted to Google Apps Script, I noticed several modification points in your script.

UrlFetchApp.fetch has no properties of body and redirect. About followRedirects, the official document says as follows: If false the fetch doesn't automatically follow HTTP redirects; it returns the original HTTP response. The default is true.

In your URL, https://api.hubapi.com/crm/v3/objects/contacts/search? is used. If you don't use the API key, this URL (with no hapikey parameter) is the one to use.

When the above modification is reflected in your script, it becomes as follows.

Modified script:

Please modify as follows.

From:

var options = {
  'method' : 'post',
  headers: headers,
  'contentType': 'application/json',
  // Convert the JavaScript object to a JSON string.
  body : raw,
  redirect: 'follow',
  "muteHttpExceptions": true
};

To:

var options = {
  method : 'post',
  headers: headers,
  contentType: 'application/json',
  payload : raw,
  muteHttpExceptions: true
};

Note:

The above modification is required for your script. But I'm worried about the error of Authentication credentials not found.. This modification supposes that your access token from service.getAccessToken() can be used for this request. When I saw the official document, the API key can also be used. If the access token cannot be used, how about using the API key? It's like below.

https://api.hubapi.com/crm/v3/objects/contacts/search?hapikey=YOUR_HUBSPOT_API_KEY

References:
Search of HubSpot API
fetch(url, params)
The best and easiest way for you to achieve this is by using the Firestore API to retrieve values. This way, you will be able to easily access your data and return the values in JSON format, as you want to. For that, you need to follow a URL pattern to retrieve the values. An example is the below URL:

https://firestore.googleapis.com/v1/projects/YOUR_PROJECT_ID/databases/(default)/documents/dadosusuarios/<usuario>

More information can be found in the official documentation here: Making Rest Calls

However, in case you need to use the Express methods that you are using, you can check this documentation for more information: Call functions via HTTP requests

And I would start by adding the below part to your code, so the authentication and CORS problems are not affecting you.

// Automatically allow cross-origin requests
app.use(cors({ origin: true }));
// Add middleware to authenticate requests
app.use(myMiddleware);

Let me know if the information helped you!
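For context, those two app.use() lines assume a setup roughly like the following sketch (myMiddleware is a hypothetical placeholder for your own auth check; cors and express are the usual npm packages):

const functions = require('firebase-functions');
const express = require('express');
const cors = require('cors');

const app = express();

// Automatically allow cross-origin requests
app.use(cors({ origin: true }));

// Hypothetical auth middleware — verify a Firebase ID token here, for example
const myMiddleware = (req, res, next) => next();
app.use(myMiddleware);

app.get('/usuarios/:id', async (req, res) => {
  // read from Firestore via the Admin SDK and answer with JSON
  res.json({ id: req.params.id });
});

// expose the Express app as a single Cloud Function
exports.api = functions.https.onRequest(app);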
When using the client-side SDKs (such as the JavaScript SDK) for Firebase Authentication, a user can only access and delete their own account. That is by design, as anything else would be a significant security risk.

Firebase does provide so-called Admin SDKs for use in trusted environments, such as your development machine, a server you control, or Cloud Functions. Since these Admin SDKs run in trusted environments, they support a broader set of operations, like deleting or deactivating any user in the project based on their UID. But the Admin SDKs can only be used in trusted environments. They can't be included in the client-side application, such as in your Angular app.

This means you'll typically have to take a two-step approach:

Create a custom API/server-side endpoint (such as a Callable Cloud Function) that uses the Admin SDK to delete a user you specify. In this code you'll want to ensure that the call comes from a trusted user, for example by hard-coding the UIDs that are allowed to make such calls.
Then call this endpoint from your application code.
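A minimal sketch of such a Callable Cloud Function (the allow-list constant and payload shape are illustrative assumptions; admin.auth().deleteUser() is the Admin SDK call that does the work):

// functions/index.js
const functions = require('firebase-functions');
const admin = require('firebase-admin');

admin.initializeApp();

// hard-coded allow-list of UIDs that may delete other accounts (placeholder value)
const TRUSTED_UIDS = ['REPLACE_WITH_ADMIN_UID'];

exports.deleteUser = functions.https.onCall(async (data, context) => {
  // reject unauthenticated callers and anyone not on the allow-list
  if (!context.auth || !TRUSTED_UIDS.includes(context.auth.uid)) {
    throw new functions.https.HttpsError('permission-denied', 'Not allowed to delete users.');
  }

  await admin.auth().deleteUser(data.uid);
  return { deleted: data.uid };
});

From the Angular app you'd then invoke it through the Functions client, e.g. httpsCallable('deleteUser')({ uid }).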
First, the status code indicated by Forbid() is 403 instead of 401. Secondly, the Forbid() method needs to rely on the authentication stack to respond. If you don't have any authentication handlers in your pipeline, you can't use Forbid(). Instead, you should use return StatusCode(403). You can refer to this.

I have made a simple demo, you can refer to it:

Update

ApiController:

[Route("api/[controller]")]
[ApiController]
public class ApiControllerBase : ControllerBase
{
    [ApiExplorerSettings(IgnoreApi = true)]
    protected IActionResult IsAccess(int carrierId)
    {
        if (carrierId >= 1)
        {
            return StatusCode(403);
        }
        else
        {
            return Ok();
        }
    }
}

TestBaseController:

public class TestBaseController : ApiControllerBase
{
    public IActionResult GetTabletListByGroup()
    {
        return IsAccess(55555);
    }
}

Here is the test result: requesting GetTabletListByGroup() in this demo returns a 403 response.
Alright. This is how we insert and select a field using AES_ENCRYPT() and AES_DECRYPT() with MySQL's default block_encryption_mode, aes-128-ecb. The block_encryption_mode variable controls the block encryption mode. The default setting is aes-128-ecb. ECB mode is useful for databases because it doesn't require an IV, and therefore there is a 1:1 ciphertext:plaintext relationship.

Notice that we never really use AES_DECRYPT() to decrypt the password stored in the database. You should have zero knowledge of a user's password. Instead, we encrypt the user's input attempt at the correct password. If both encrypted values match, then we have a successful login.

/* ANALYSIS */
SELECT SHA2('privateKey',512);
SELECT LENGTH(SHA2('privateKey',512));
SELECT UNHEX(SHA2('privateKey',512));
SELECT LENGTH(UNHEX(SHA2('privateKey',512)));

/* INSERT OWNER */
INSERT INTO 01_tblCompany (ownerId, ownerPassword)
VALUES ('owner001', AES_ENCRYPT('password123', UNHEX(SHA2('privateKey',512))));

/* SELECT OWNER */
SELECT ownerId, ownerPassword
FROM 01_tblCompany
WHERE ownerId = 'owner001'
AND ownerPassword = AES_ENCRYPT('password123', UNHEX(SHA2('privateKey',512)));

/* INSERT USER */
INSERT INTO 02_tblCompanyUsers (ownerId, userName, userPassword)
VALUES ('owner001', 'user001', AES_ENCRYPT('password123', UNHEX(SHA2('privateKey',512))));

/* SELECT USER */
SELECT ownerId, userName, userPassword
FROM 02_tblCompanyUsers
WHERE userName = 'user001'
AND userPassword = AES_ENCRYPT('password123', UNHEX(SHA2('privateKey',512)));
Server-side, controller/API code can never trust data coming from the client. For a controller method you will have an authenticated user associated with the session. For every request you should assert that the provided ID can be modified by the session user. API methods will include authentication tokens to assert and identify the User in order to determine if they are accessing appropriate records. If you detect that an ID is coming in that the current user does not have access to, it is an immediate event log notification to administrators about the violation (User ID, record ID, date/time, IP Address, etc.) and terminating the user's session. (Kick to login) The system should track these violations against users and repeated attempts should lock the user's account in the event that their account has been compromised. (Typically it's just a curious bumpkin wondering if something is open/possible) The same goes for any data coming back with the update request. Everything needs to be validated and only the fields that you allow to be updated should be persisted. This is the #1 reason I advise that you never pass EF entities between client and server. The payload of a request can be tampered with, so code that does an Attach + Modified or Update approach is vulnerable to data tampering.
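As an illustrative sketch of that ownership assertion in a controller action (names like _db, OwnerId and the DTO are assumptions — the point is validating the incoming ID against the session user and persisting only allow-listed fields):

[HttpPut("orders/{id}")]
public async Task<IActionResult> UpdateOrder(int id, OrderUpdateDto dto)
{
    var userId = User.FindFirstValue(ClaimTypes.NameIdentifier);
    var order = await _db.Orders.SingleOrDefaultAsync(o => o.Id == id && o.OwnerId == userId);

    if (order == null)
    {
        // record doesn't exist or isn't the caller's: log the violation and refuse
        _logger.LogWarning("User {UserId} attempted to modify order {OrderId} from {Ip}",
            userId, id, HttpContext.Connection.RemoteIpAddress);
        return Forbid();
    }

    // persist only the fields the client is allowed to change — never map the whole payload
    order.ShippingAddress = dto.ShippingAddress;
    order.Notes = dto.Notes;
    await _db.SaveChangesAsync();
    return NoContent();
}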
It is said on the Microsoft site that Starting December 2019, the minimum required .NET version for build agents is 4.6.2 or higher. You can check if .NET Framework 4.6.2 or higher is installed on your machine. See the below prerequisites to install an on-premise agent:

Windows 7, 8.1, or 10 (if using a client OS)
Windows 2008 R2 SP1 or higher (if using a server OS)
PowerShell 3.0 or higher
.NET Framework 4.6.2 or higher

Check the document here for more information.

The PAT token is only used during the installation of the agent. You do not need to install a new agent when the PAT is expired. See the below note from the Microsoft document here.

Note: when using PAT as the authentication method, the PAT token is only used during the initial configuration of the agent. Later, if the PAT expires or needs to be renewed, no further changes are required by the agent.

Update:

You can check out this link and try downloading a different version (e.g. an older version) of the deployment agent package. After the deployment agent package is downloaded, create a new folder (e.g. c:/mydeployagent), unzip the package to this folder, then run the below command from PowerShell:

.\config.cmd --deploymentgroup --deploymentgroupname "your deployment group name" --agent $env:COMPUTERNAME --runasservice --work '_work' --url 'https://dev.azure.com/yourOrganization/' --projectname 'Your project Name'
First, you should put your connection information in a configuration file, for example an ldap.yml file.

ldap:
  url: ldap://XXXXXXX:389/
  root: cn=root,dc=root,dc=com
  userDn: cn=root,dc=root,dc=com
  password: XXXXXX
  baseDN: dc=root,dc=com
  clean: true
  pooled: false

Then use these attributes to inject an LdapTemplate bean. This is the properties class:

@ConfigurationProperties(prefix = "ldap")
public class LdapProperties {
    private String url;
    private String userDn;
    private String password;
    private String baseDN;
    private String clean;
    private String root;
    private boolean pooled = false;

    // getters and setters are required for the @ConfigurationProperties binding
    // (omitted here; Lombok @Getter/@Setter would also work)
}

This is the configuration class:

@Configuration
@EnableConfigurationProperties({LdapProperties.class})
public class LdapConfiguration {
    @Autowired
    LdapProperties ldapProperties;

    @Bean
    public LdapTemplate ldapTemplate() {
        LdapContextSource contextSource = new LdapContextSource();
        contextSource.setUrl(ldapProperties.getUrl());
        contextSource.setUserDn(ldapProperties.getUserDn());
        contextSource.setPassword(ldapProperties.getPassword());
        contextSource.setPooled(ldapProperties.isPooled());
        contextSource.setBase(ldapProperties.getBaseDN());
        contextSource.afterPropertiesSet();
        return new LdapTemplate(contextSource);
    }
}

Then you can use the @Autowired annotation. This annotation allows Spring to resolve and inject collaborating beans into your bean.

@Autowired
LdapTemplate ldapTemplate;

Using ldapTemplate you can do CRUD just like with a relational database, and of course you can do authentication as well (see the sketch below). This is my first time answering questions on Stack Overflow; I welcome you to point out my mistakes.
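A minimal authentication sketch with that template (the uid/objectclass filter attributes are assumptions — match them to your directory's schema):

import org.springframework.ldap.filter.AndFilter;
import org.springframework.ldap.filter.EqualsFilter;

public boolean login(String username, String password) {
    AndFilter filter = new AndFilter();
    filter.and(new EqualsFilter("objectclass", "person"))
          .and(new EqualsFilter("uid", username));
    // search from the configured base and try to bind as the matched entry with the given password
    return ldapTemplate.authenticate("", filter.toString(), password);
}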
There are a couple of errors, from string comparison to string definition to char/char-array variables. It looks like JS sometimes, but take care of the big differences. Besides that, I see another answer was added while I was writing this one; I want to add: pay attention to the length of the array in scanf so you don't cause a buffer overflow at runtime. You can check it out here: https://onlinegdb.com/HyrgvK-sL

#include <stdio.h>
#include <string.h>

void danidev (void)
{
    printf ("Dani is a YouTuber and an indie game developer and an fps game developer having his game published in play store he is 22 years old and goes to a university");
}

int main ()
{
    printf("HERE IS THE INFORMATION OF FAMOUS CODING YOUTUBERS(PLS TYPE THE FOLLOWING YOUTUBERS NAME): ");
    char b[32];
    scanf("%31s", b);
    if (strncmp(b, "danidev", 32) == 0)
    {
        danidev ();
    }
    else
    {
        printf (" i dont know what you are talking about");
    }
    return 0;
}
The idea would be to wrap the tabs screen inside a component and add it to the stack conditionally.

const HomeScreen = () => {
  return (
    <Bottom.Navigator initialRouteName="Dashboard">
      <Bottom.Screen name="Dashboard" component={TabDashboard} />
      <Bottom.Screen name="Profile" component={TabProfile} />
    </Bottom.Navigator>
  );
}

Your stack should change as below

render() {
  return (
    <NavigationContainer>
      <Stack.Navigator initialRouteName="Welcome" headerMode='none'>
        {
          this.state.isSignedIn ? (
            <>
              <Stack.Screen name="Home" component={HomeScreen} />
            </>
          ) : (
            <>
              <Stack.Screen name="Welcome" component={WelcomeScreen} />
              <Stack.Screen name="Login" component={LoginScreen} />
              <Stack.Screen name="Signup" component={SignupScreen} />
              <Stack.Screen name="ResetPassword" component={ResetPasswordScreen} />
            </>
          )
        }
      </Stack.Navigator>
    </NavigationContainer>
  );
}

isSignedIn can be a state variable or wherever you store the logged-in status. You can refer to the authentication flows: https://reactnavigation.org/docs/auth-flow
First, I changed

@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
    auth.parentAuthenticationManager(authenticationManagerBean())
        .userDetailsService(customUserDetailsService)
        .passwordEncoder(passwordEncoder());
}

to

@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
    auth.userDetailsService(customUserDetailsService)
        .passwordEncoder(passwordEncoder());
}

Second, I stored the password in the database bcrypt-encoded: today -> $2a$04$TwPX14ddISSYsW4/fvzxfu8uSyJQXq415OrlWwrLACxBycRmdS07u

I made my curl request with the plain, unencoded password:

curl -i -v -X POST -H 'Content-Type: application/x-www-form-urlencoded' -k http://localhost:8080/oauth/token -H 'Authorization: Basic Y2xpZW50OnNlY3JldA==' -d 'grant_type=password&client_id=client&username=emoleumassi&password=today&scope=write'

Does someone know the influence of parentAuthenticationManager?
I think I figured it out. I moved my added code to a separate .then, and made sure the return for the one before it doesn't use res.json, but just returns the object. I then return that object in the new .then using res.json whether the user table update succeeds or fails, thus: api.post('/login', passport.authenticate('ldapauth', {session: true, failureFlash: 'Invalid username or password.', failWithError: true}), (req, res, next) => { // this is only called if authentication was successful. this is called after the serializer. // the authenticated user is found in req.user. let adGroups = req.user.memberOf; let office = req.user.physicalDeliveryOfficeName; let company = req.user.company; let department = req.user.department; // console.log('Member Of', adGroups); if (typeof req.user.sessionID !== 'undefined') { winston.info(`\"Login Successful for \"${req.user.displayName}\" (${req.user.sAMAccountName})\" - SessionID: ${req.user.sessionID} ${req.ip} \"${req.method} ${req.originalUrl} HTTP/${req.httpVersion}\" \"${req.headers['referer']}\" \"${req.headers['user-agent']}\" \"${req.headers['content-length']}\"`, {username: req.user.sAMAccountName, sessionID: req.user.sessionID, ip: req.ip, referrer: req.headers['referer'], url: req.originalUrl, query: req.method, route: 'Authentication'}); } else { winston.info(`\"Login Successful for \"${req.user.displayName}\" (${req.user.sAMAccountName})\" - SessionID: ${req.user.dataValues.sid} ${req.ip} \"${req.method} ${req.originalUrl} HTTP/${req.httpVersion}\" \"${req.headers['referer']}\" \"${req.headers['user-agent']}\" \"${req.headers['content-length']}\"`, {username: req.user.sAMAccountName, sessionID: req.user.dataValues.sid, ip: req.ip, referrer: req.headers['referer'], url: req.originalUrl, query: req.method, route: 'Authentication'}); } // console.log('Req user: ', req.user); // console.log('right here 1') let tempAbilities = []; let userAbilities = []; let allAbilities = displayAbilities(); let include = [{ model: Roles, where: { active: 1 }, include: [ RolesPerms ] }]; let options = { where: { username: req.user.sAMAccountName }, include: include }; let userID = null; // TODO check if found user is 'active === 0' and if so, fail the authentication; use where clause "active != 0" // console.log('right here 2') Users.findOne(options).then(userResult => { // console.log('user result', userResult); if (userResult === null) { if (typeof req.user.sessionID !== 'undefined') { winston.info(`\"Login user not found - looking for group for \"${req.user.displayName}\" (${req.user.sAMAccountName})\" - SessionID: ${req.user.sessionID} ${req.ip} \"${req.method} ${req.originalUrl} HTTP/${req.httpVersion}\" \"${req.headers['referer']}\" \"${req.headers['user-agent']}\" \"${req.headers['content-length']}\"`, {username: req.user.sAMAccountName, sessionID: req.user.sessionID, ip: req.ip, referrer: req.headers['referer'], url: req.originalUrl, query: req.method, route: 'Authentication'}); } else { winston.info(`\"Login user not found - looking for group for \"${req.user.displayName}\" (${req.user.sAMAccountName})\" - SessionID: ${req.user.dataValues.sid} ${req.ip} \"${req.method} ${req.originalUrl} HTTP/${req.httpVersion}\" \"${req.headers['referer']}\" \"${req.headers['user-agent']}\" \"${req.headers['content-length']}\"`, {username: req.user.sAMAccountName, sessionID: req.user.dataValues.sid, ip: req.ip, referrer: req.headers['referer'], url: req.originalUrl, query: req.method, route: 'Authentication'}); } throw new Error('User not in SutterNow DB.') } 
userID = userResult.dataValues.id; userResult.roles.forEach(rElement => { rElement.roles_perms.forEach(rpElement => { // console.log('rpElement', rpElement); // if (abilities.findIndex(x => x.prop=="propVal")) --> an example of using an object property to find index if (tempAbilities.indexOf(rpElement.dataValues.permission_name) === -1) { tempAbilities.push(rpElement.dataValues.permission_name); userAbilities.push(allAbilities[allAbilities.findIndex(x => x.name === rpElement.dataValues.permission_name)]); } }) }); req.session.rules = userAbilities; let location = { office: office, company: company, department: department } req.session.location = location let adLocation = { office: userResult.dataValues.office, company: userResult.dataValues.company, department: userResult.dataValues.department } return {id: userID, rules: userAbilities, location: location, adLocation: adLocation, "Message": "Login Successful"}; }).then(result => { if (result.adLocation.office !== result.location.office || result.adLocation.department !== result.location.department || result.adLocation.company !== result.location.company) { // update the database with the new AD values Users.update({ // values office: office, department: department, company: company }, { // options where: { id: userID } }).then(numAffected => { winston.info(`\"Updated ${numAffected} user for ${req.user.dataValues.username}\" - SessionID: ${req.sessionID} ${req.ip} \"${req.method} ${req.originalUrl} HTTP/${req.httpVersion}\" \"${req.headers['referer']}\" \"${req.headers['user-agent']}\" \"${req.headers['content-length']}\"`, {username: req.user.dataValues.username, sessionID: req.sessionID, ip: req.ip, referrer: req.headers['referer'], url: req.originalUrl, query: req.method, route: 'User Admin'}); return res.json(result); }).catch((err) => { if (err) { // Not sure how to get an error here. ensureAuthenticated handles invalid users attempting this PUT. // console.log(err); err.route = 'User Admin'; err.statusCode = 'UPDATE_USER_AT_LOGIN_BY_ID_ERROR'; err.status = 'UPDATE USER AT LOGIN BY ID ERROR'; err.shouldRedirect = req.headers['user-agent'].indexOf('Postman') > -1; if (app.get('env') === 'production') { console.log('stack redacted'); err.stack = '';// We want to obscure any data the user shouldn't see. 
} next(err); } }); } else { return res.json(result); } }).catch(err => { // console.log('ah crap') if (typeof req.user.sessionID !== 'undefined') { winston.info(`\"Looking for group for \"${req.user.displayName}\" (${req.user.sAMAccountName})\" - SessionID: ${req.user.sessionID} ${req.ip} \"${req.method} ${req.originalUrl} HTTP/${req.httpVersion}\" \"${req.headers['referer']}\" \"${req.headers['user-agent']}\" \"${req.headers['content-length']}\"`, {username: req.user.sAMAccountName, sessionID: req.user.sessionID, ip: req.ip, referrer: req.headers['referer'], url: req.originalUrl, query: req.method, route: 'Authentication'}); } else { winston.info(`\"Looking for group for \"${req.user.displayName}\" (${req.user.sAMAccountName})\" - SessionID: ${req.user.dataValues.sid} ${req.ip} \"${req.method} ${req.originalUrl} HTTP/${req.httpVersion}\" \"${req.headers['referer']}\" \"${req.headers['user-agent']}\" \"${req.headers['content-length']}\"`, {username: req.user.sAMAccountName, sessionID: req.user.dataValues.sid, ip: req.ip, referrer: req.headers['referer'], url: req.originalUrl, query: req.method, route: 'Authentication'}); } const promA = Groups.findAll({ where: { active: 1 }, include: [{ model: Roles, where: { active: 1 }, include: [ RolesPerms ] }], order: ['displayName'] }); const promB = GroupsRoles.findAll(); // console.error(err); // user not found in our DB, but they're in AD; let's check if they belong to an approved group Promise.all([promA, promB]).then(responses => { // console.log('Response 1', responses[0]); // console.log('Response 2', responses[1]); if (typeof req.user.sessionID !== 'undefined') { winston.info(`\"Found group results for \"${req.user.displayName}\" (${req.user.sAMAccountName})\" - SessionID: ${req.user.sessionID} ${req.ip} \"${req.method} ${req.originalUrl} HTTP/${req.httpVersion}\" \"${req.headers['referer']}\" \"${req.headers['user-agent']}\" \"${req.headers['content-length']}\"`, {username: req.user.sAMAccountName, sessionID: req.user.sessionID, ip: req.ip, referrer: req.headers['referer'], url: req.originalUrl, query: req.method, route: 'Authentication'}); } else { winston.info(`\"Found group results for \"${req.user.displayName}\" (${req.user.sAMAccountName})\" - SessionID: ${req.user.dataValues.sid} ${req.ip} \"${req.method} ${req.originalUrl} HTTP/${req.httpVersion}\" \"${req.headers['referer']}\" \"${req.headers['user-agent']}\" \"${req.headers['content-length']}\"`, {username: req.user.sAMAccountName, sessionID: req.user.dataValues.sid, ip: req.ip, referrer: req.headers['referer'], url: req.originalUrl, query: req.method, route: 'Authentication'}); } let foundGroup = '' let foundRoles = [] let foundPerms = [] responses[0].forEach(el => { // console.log('our el', el) // console.log('our group', el.dataValues) for (let j = 0; j < adGroups.length; j++) { // console.log('adGroups j', adGroups[j]) if (adGroups[j].match(el.dataValues.username)) { // console.log('found it', el.dataValues) userID = el.id foundGroup = el.dataValues.username; foundRoles = el.dataValues; break // TODO allow for membership in multiple groups, like I do multiple roles, below } } }); foundRoles.roles.forEach(role => { // console.log('roles_perms things', role.roles_perms); role.roles_perms.forEach(roleP => { if (foundPerms.indexOf(roleP.dataValues.permission_name) === -1) { foundPerms.push(roleP.dataValues.permission_name); userAbilities.push(allAbilities[allAbilities.findIndex(x => x.name === roleP.dataValues.permission_name)]); } }) }); req.session.rules = userAbilities; let 
location = { office: office, company: company, department: department } req.session.location = location return res.json({id: 0, rules: userAbilities, location: location, "Message": "Login Successful via group membership"}); }).catch(err => { console.error('the error ' + err); return res.json({"Message": "Login failed. You neither had a user account in SutterNow, nor did you belong to a valid AD Group."}) }); }); return null; // res.json({"Message":"Login successful"}); }, (err, req, res, next) => { /* failWithErrors: true, above, makes this section get called to handle the error. We don't handle the logging nor the json return here; instead, we setup the error object and pass it on to the error handler which does those things. */ console.log('Authentication failed; passing error on to error handler...'); err.route = 'Authentication'; err.statusCode = 401; err.status = 401; err.shouldRedirect = req.headers['user-agent'].indexOf('Postman') > -1; if (typeof req.flash === 'function' && typeof req.flash('error') !== 'undefined' && req.flash('error').length !== 0) { err.message = req.flash('error').slice(-1)[0]; console.log('Flash error ' + req.flash('error').slice(-1)[0]); res.statusMessage = req.flash('error').slice(-1)[0]; } if (app.get('env') === 'production') { console.log('stack redacted'); err.stack = '';// We want to obscure any data the user shouldn't see. } next(err); // return res.json({"Message": "Login failed. " + err.message}); // return null; } );
TL;DR: .NET Core is doing a lot to fight you on this approach under the hood. Not entirely an answer on what to do, but hopefully helpful background on the HttpClientFactory approach, based on my understanding of the components.

First, from the ASP.NET Core docs in regards to impersonation:

ASP.NET Core doesn't implement impersonation. Apps run with the app's identity for all requests, using app pool or process identity. If the app should perform an action on behalf of a user, use WindowsIdentity.RunImpersonated in a terminal inline middleware in Startup.Configure. Run a single action in this context and then close the context. RunImpersonated doesn't support asynchronous operations and shouldn't be used for complex scenarios. For example, wrapping entire requests or middleware chains isn't supported or recommended.

As you call out, there's a lot of progress .NET Core has made around how HttpClient instances are handled to resolve socket exhaustion and the expensive operations around the underlying handlers. First, there's HttpClientFactory, which in addition to supporting creating named/typed clients with their own pipelines, also attempts to manage and reuse a pool of primary handlers. Second, there's SocketsHttpHandler, which itself manages a connection pool and replaces the previous unmanaged handler by default, and is actually used under the hood when you create a new HttpClientHandler. There's a really good post about this on Steve Gordon's blog: HttpClient Connection Pooling in .NET Core. As you're injecting instances of HttpClient around from the factory, it becomes way safer to treat them as scoped and dispose of them, because the handlers are no longer your problem.

Unfortunately, all that pooling and async-friendly reuse makes your particular impersonation case difficult, because you actually need the opposite: synchronous calls that clean up after themselves and don't leave the connection open with the previous credentials. Additionally, what used to be a lower-level capability, HttpWebRequest, now actually sits on top of HttpClient instead of the other way around, so you can't even skip it all that well by trying to run the requests as a one-off.

It might be a better option to look into using OpenID Connect and IdentityServer or something to centralize that identity management and Windows auth and pass around JWT everywhere instead.

If you really need to just "make it work", you might try at least adding some protections around the handler and its connection pooling when it comes to the instance that is getting used to make these requests; even if the new clients per request are working most of the time, deliberately cleaning up after them might be safer. Full disclaimer, I have not tested the below code, so consider it conceptual at best. (Updated: Switched the static/semaphore to a regular instance since the last attempt didn't work)

using (var handler = new SocketsHttpHandler()
{
    Credentials = CredentialCache.DefaultCredentials,
    PooledConnectionLifetime = TimeSpan.Zero,
    MaxConnectionsPerServer = 1
})
using (var client = new HttpClient(handler, true))
{
    return client.GetStringAsync(uri).Result;
}
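For completeness, the WindowsIdentity.RunImpersonated route the docs mention would look roughly like this as terminal middleware (a sketch assuming Windows authentication is enabled; the backend URL is a placeholder):

app.Run(async context =>
{
    var winIdentity = (WindowsIdentity)context.User.Identity;

    // RunImpersonated is synchronous by design — don't await inside the delegate
    var body = WindowsIdentity.RunImpersonated(winIdentity.AccessToken, () =>
    {
        using (var handler = new HttpClientHandler { Credentials = CredentialCache.DefaultCredentials })
        using (var client = new HttpClient(handler))
        {
            return client.GetStringAsync("https://backend.example/api/data").Result;
        }
    });

    await context.Response.WriteAsync(body);
});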
According to what you have provided, there will definitely be a delay. I'll explain what is happening here. You are requesting data from Firebase after you have rendered the details on Profile. This happens because you are requesting data in the componentDidMount method. This method only gets called once the render method has completely finished rendering your components. So I'll suggest two methods to get rid of that.

As Coding Duck suggested, you can show a skeleton loader until you fetch data from Firebase.

You can request these data from your login. That means, if user authentication is successful, you can request these data using the fetchProfile action and, once you have fetched these data completely, you can use Navigation.navigate('Profile') to navigate to your Profile screen rather than directly navigating to it as soon as authentication succeeds. By that time, since you have fetched the data already, there will be no issue.

Also you can use Firebase's persistence option to store these data locally. So even if there were no internet connection, Firebase would still provide your profile information rapidly.

EDIT

A more specific answer with some random class and function names. This is just an example. Let's say an onLogin function handles all your login requirements in your authentication class.

onLogin = () => {
  /** Do the validation and let's assume the validation succeeds */

  /** Now you are not directly navigating to your Profile page like you did in the GIF.
      I assume that's what you did because you have not added more code samples to fully
      understand what you have done. */

  /** So now you are calling the fetchProfile action through props and retrieving your details. */
  this.props.fetchProfile(this.props.navigation);
};

Now let's modify your fetchProfile action.

export const fetchProfile = (navigation) => {
  const {currentUser} = firebase.auth();

  return (dispatch) => {
    firebase
      .database()
      .ref(`/users/${currentUser.uid}/profile`)
      .on('value', (snapshot) => {
        dispatch({
          type: PROFILE_FETCH,
          payload: snapshot.val(),
        });
        navigation.navigate('Profile')
      });
  };
};

Note: This is not the best method of handling navigation; rather, use a global navigation service to directly access the top-level navigator. You can learn more about that in the React Navigation documentation. But let's use this for now in this example.

So as you can see, when user login is successful, you are not requesting data after rendering the Profile page but requesting it even before navigating to the page. This ensures that the Profile page only gets loaded with relevant data and there will be no lag like in your GIF.
I am not sure if your snippet actually contains all relevant details of the problematic situation, but if you add a max-width to .column (adjust value as desired) and text-align: center; to all of its children (using .column * as a selector) , it displays as desired: .flexbox { display: flex; flex-direction: row; flex-wrap: wrap; max-width: 100%; justify-content: space-around; } .column { display: flex; flex-direction: column; flex: 1; max-width: 240px; } .column * { text-align: center; } <section class="flexbox"> <div class="left-side column"> <div class="one column-container"> <img class="feature-img" src="images/icon-access-anywhere.svg" alt="icon 1"> <h3> Access your files, anywhere </h3> <p class="features-para"> The ability to use a smartphone, tablet, or computer to access your account means your files follow you everywhere</p> </div> <div class="two column-container"> <img class="feature-img" src="images/icon-security.svg" alt="icon 2"> <h3> Security you can trust </h3> <p class="features-para">2-factor authentication and user-controlled encryption are just a couple of the security features we allow to help secure your files.</p> </div> </div> <div class="right-side column"> <div class="three column-container"> <img class="feature-img" src="images/icon-collaboration.svg" alt="icon 3"> <h3> Real-time collaboration </h3> <p class="features-para">Securely share files and folders with friends, family and colleagues for live collaboration. No email attachments required.</p> </div> <div class="four column-container"> <img class="feature-img" src="images/icon-any-file.svg" alt="icon 4"> <h3> Store any type of file </h3> <p class="features-para"> Whether you're sharing holidays photos or work documents, Fylo has you covered allowing for all file types to be securely stored and shared.</p> </div> </div> </section>
The "connection refused" message almost always resolve to one these two possibilities: 1. The server or service is unstable or unavailable. In this situation there might be a few things going on the server: Options 1.1 - Incorrect port / listener is down The port you are using is correct but the listener (process on the servers) is down, meaning that you are connecting to the correct port, but there's no process to listen for your requests. Possible solution is to try running telnet <address> <port> and check if the service is up. Example: telnet google.com 443. Option 1.2 - Threshold limit The port and the service are up, but you've reached the threshold that limits the configured TCP connectivity. That might occur due to high traffic (peaks) to the endpoint. That is not something you can solve yourself. TCP listeners might reject the caller connection if the traffic is too high. One way of testing these is to implement a load testing script that tests the connectivity over time. If you prove that the server is limiting the requests you can then report and ask them to increase the load capabilities (allow for a higher capacity for simultaneous requests). 2. The client cannot communicate with the server Option 2.1 - Proxy If you are connection from an enterprise network and there is a proxy in between, that might be why you are having such difficulties. Solution: run your code from home or an external network to prove that you are able to connect from outside the corporate network. Option 2.2 - Firewall Just like the proxy, you're running your code behind a firewall that is either blocking or interfering with your communication with the external network. I was able to run your code and connect to Google using my personal credentials. I had to perform a slight change due to a problem to the Deno library due to type definitions but it worked fine. I strongly suspect that your problem is related to the infrastructure (local or remote), not to the Deno runtime or library. import { SmtpClient } from "https://deno.land/x/smtp/mod.ts"; import { ConnectConfigWithAuthentication } from "https://raw.githubusercontent.com/manyuanrong/deno-smtp/master/config.ts"; const client = new SmtpClient(); const params = <ConnectConfigWithAuthentication>{ hostname: "smtp.google.com", port: 465, username: "<google mail>", password: "<password>" } await client.connectTLS(params); await client.send({ from: "<from_email>", // Your Email address to: "<to_email>", // Email address of the destination subject: "Mail Title", content: "Mail Content,maybe HTML", }); await client.close(); I tried to create a user on the smtp.163.com website but I couldn't understand the language, if you give me test credentials I try myself.
Your code has some unrelated problems. I changed it a little and pyinstaller works as expected.

import subprocess
import smtplib
#from smtplib import *   # Here you tried to import smtplib again
#import re               # Not needed for this example

# command1 = "netsh wlan show profile"
# networks = subprocess.check_output(command1, shell=True)
# network_list = re.findall('(?:Profile\s*:\s)(.*)', networks.decode())
#
# final_output = ""
# for network in network_list:
#     command2 = "netsh wlan show profile " + network + " key=clear"
#     a_network_result = subprocess.check_output(command2, shell=True)
#     final_output += a_network_result.decode()

final_output = 'cmd output'  # Simulate the cmd output you want emailed

fromMy = 'myemail'
to = 'myEmail'
subj = 'TheSubject'
date = '23/5/2020'
message_text = final_output

# Use a regular (non-raw) string so the \n escapes become real newlines in the message
msg = "From: %s\nTo: %s\nSubject: %s\nDate: %s\n\n%s" % (
    fromMy, to, subj, date, message_text
)

username = 'MyEmail'
password = 'MyPasswd'

try:  # Will always fail without real credentials
    server = smtplib.SMTP("smtp.gmail.com", 587)
    server.starttls()
    server.login(username, password)
    server.sendmail(fromMy, to, msg)
    server.quit()
except smtplib.SMTPAuthenticationError:
    print("The username or password were incorrect")

Worked for me under Windows on both Python 3.7 and 3.8. Copy my code, then run pyinstaller <script name>.py and see if the same error is raised again.

note: You need to run <script name>.exe from the terminal in the dist\<script name> path, not build\<script name>

edit

Additional troubleshooting steps:

Try Python from the python.org site, not the Microsoft Store
Try a different environment (if you used venv or virtualenv, try the system interpreter)
Debug your script and make sure it can be run by python before you try to package it with pyinstaller
You could use Polly to add a policy handler to your client. You can then add logic for when a request returns a 401 Unauthorized: for example, have the service that uses the client refresh the bearer token and also set it on the current request. This is just a quick solution and maybe there are more elegant solutions. But it will also come in handy when your token expires, because then it will be refreshed automatically.

services.AddHttpClient("YourClient")
    .AddPolicyHandler((provider, request) =>
    {
        return Policy.HandleResult<HttpResponseMessage>(r => r.StatusCode == HttpStatusCode.Unauthorized)
            .RetryAsync(1, async (response, retryCount, context) =>
            {
                var service = provider.GetRequiredService<IYourService>();
                request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", await service.RefreshToken());
            });
    });
Note: Azure Data Lake Storage Gen1 protects your data throughout its life cycle. For data in transit, Data Lake Storage Gen1 uses the industry-standard Transport Layer Security (TLS 1.2) protocol to secure data over the network.

Encryption in Azure Data Lake Storage Gen1 helps you protect your data, implement enterprise security policies, and meet regulatory compliance requirements. Data Lake Storage Gen1 supports encryption of data both at rest and in transit. For data at rest, it supports "on by default," transparent encryption.

You can choose to have your data encrypted or opt for no encryption. If you opt in for encryption, data stored in Data Lake Storage Gen1 is encrypted prior to storing on persistent media. In such a case, Data Lake Storage Gen1 automatically encrypts data prior to persisting and decrypts data prior to retrieval, so it is completely transparent to the client accessing the data. There is no code change required on the client side to encrypt/decrypt data.

Reference: Encryption of data in Azure Data Lake Storage Gen1
Ok, I changed the @ManagedBean annotation to @Component, linked AuctionViewService with @Autowired, and it works for me :) but how can I explain this situation? (The likely explanation: with @ManagedBean the bean was instantiated by JSF, which knows nothing about Spring's @Autowired, so auctionViewService was never injected and stayed null. With @Component the bean is created by the Spring container, which performs the injection. Note, though, that the JSF javax.faces.bean.ViewScoped annotation has no effect on a Spring-managed bean — without extra scope configuration the @Component becomes an application-wide singleton.)

package application.beans;

import application.model.views.AuctionView;
import application.service.AuctionViewService;
import lombok.Getter;
import lombok.Setter;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.ViewScoped;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

@Setter
@Getter
@ViewScoped
@Component
//@ManagedBean(name="SalesBean")
public class SalesBean implements Serializable {

    @Autowired
    private AuctionViewService auctionViewService;

    //private String userName = SecurityContextHolder.getContext().getAuthentication().getName();
    private String userName = "[email protected]";
    private String firstName = "first";
    private String lastName = "last";
    private List<AuctionView> userProperty = new ArrayList<>();

    @PostConstruct
    public void init() {
        userProperty = auctionViewService.findByEmail(userName);
    }
}
It is behaving that way - expecting a user to authenticate - because you are using OAuth2 authentication which is meant for scenarios where a user grants your program access to their spreadsheet. For your scenario you want to use the other type of authentication: service account. Actually, you can use a service account with OAuth, as described here: Using OAuth 2.0 for Server to Server Applications https://developers.google.com/identity/protocols/oauth2/service-account And you could consider this because the code examples tend to use OAuth, but I'm suggesting that you instead go back to the sheets API in the console: https://console.cloud.google.com/apis/api/sheets.googleapis.com/credentials?project=yourProjectId And click 'create credentials' and choose a service account instead of OAuth. This method has a simpler authentication flow since it is meant for your scenario: no end-user. Also, I haven't done this in a while, but I think I took the service account client_email (the email address of the account - the main identifier) and give it permission to access the sheet, ie. share your sheet with it. Finally, I would note - and you probably already know this - but you mention using sheets like a database and it doesn't have the performance characteristics for this. It is more like a CRM: eg. suitable for a building a static site.
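If you go the service-account route, the flow ends up as short as the sketch below (Python client libraries; the JSON key file name, spreadsheet ID and range are placeholders, and the sheet must already be shared with the service account's client_email):

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ['https://www.googleapis.com/auth/spreadsheets.readonly']

# the JSON key file downloaded when creating the service account credentials
creds = service_account.Credentials.from_service_account_file(
    'service-account.json', scopes=SCOPES)

service = build('sheets', 'v4', credentials=creds)
result = service.spreadsheets().values().get(
    spreadsheetId='YOUR_SPREADSHEET_ID', range='Sheet1!A1:C10').execute()
print(result.get('values', []))

No consent screen, no end-user: the service account authenticates directly with its key, which is exactly the "server to server" flow the linked page describes.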
UPDATE

You can follow the official document to set it up in the portal. I have tried it and succeeded.

Create SQL managed instances (this may take a long time)
Configure the Active Directory admin
Configure your db

When you have finished, you can find the connection string in the portal. Just copy and paste it into your code. It works for me. The connection string looks like below:

Server=tcp:panshubeidb.database.windows.net,1433;Initial Catalog=dbname;Persist Security Info=False;User ID={your_username};Password={your_password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Authentication='Active Directory Password';

PREVIOUS

Your SQL connection string should look like Server=tcp:testdb.database.windows.net,1433;Initial Catalog=test;Persist Security Info=False;User ID=sasasa;Password={your_password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;. You can find it in the portal.

You can also configure your connection string's name in web.config.

<connectionStrings>
    <add name="DefaultConnection" connectionString="Your local db connection string or others" />
    <add name="DefaultConnection11" connectionString="Data Source =**;Initial Catalog = {your db in server not azure};User Id = {userid};Password ={password};" />
</connectionStrings>

You can configure your connection strings like the code given above. When you want to deploy your app, you can switch to your production database without needing to change anything in your code. For more details, you can see this article. The portal setting's priority is higher than the configuration in web.config, and it will override the value in the code, so after setting it there you do not need to modify your web.config file when deploying.
I tried porting this to C#, but there were problems with the decrypt. The version below replaces the leftover Java crypto calls with their BouncyCastle-for-C# equivalents and fixes a few byte-handling bugs: the header is two single bytes (version and key id), the RSA-block length is a 2-byte little-endian value, and ToUnixTimeSeconds() already returns seconds, so the extra division by 1000 is gone.

using System;
using System.IO;
using System.Text;
using Org.BouncyCastle.Crypto;
using Org.BouncyCastle.Crypto.Encodings;
using Org.BouncyCastle.Crypto.Engines;
using Org.BouncyCastle.Crypto.Modes;
using Org.BouncyCastle.Crypto.Parameters;
using Org.BouncyCastle.Security;

private static string EncryptPassword(string password, string encryptionPubKey, string encryptionKeyId)
{
    byte[] passwordAsByte = Encoding.ASCII.GetBytes(password);

    // the public key arrives base64-encoded PEM; strip the armour
    string decodedPubKey = Encoding.UTF8.GetString(Convert.FromBase64String(encryptionPubKey))
        .Replace("-----BEGIN PUBLIC KEY-----", "")
        .Replace("-----END PUBLIC KEY-----", "")
        .Replace("\r", "").Replace("\n", "");

    var random = new SecureRandom();
    byte[] randKey = new byte[32];
    random.NextBytes(randKey);
    byte[] iv = new byte[12];
    random.NextBytes(iv);

    // seconds since epoch, used as the GCM additional authenticated data
    string date = DateTimeOffset.UtcNow.ToUnixTimeSeconds().ToString();
    byte[] timeAad = Encoding.ASCII.GetBytes(date);

    // 2-byte header: format version and key id
    byte[] header = { 1, (byte)int.Parse(encryptionKeyId) };

    // RSA/PKCS#1 encrypt the random AES key with the server's public key
    AsymmetricKeyParameter publicKey = PublicKeyFactory.CreateKey(Convert.FromBase64String(decodedPubKey));
    var rsaCipher = new Pkcs1Encoding(new RsaEngine());
    rsaCipher.Init(true, publicKey);
    byte[] rsaEncrypted = rsaCipher.ProcessBlock(randKey, 0, randKey.Length);

    // 2-byte little-endian length of the RSA block
    byte[] sizeBuff = BitConverter.GetBytes((short)rsaEncrypted.Length);

    // AES-256-GCM encrypt the password, authenticated by the timestamp
    var parameters = new AeadParameters(new KeyParameter(randKey), 128, iv, timeAad);
    var gcmEngine = new GcmBlockCipher(new AesEngine());
    gcmEngine.Init(true, parameters);
    byte[] gcmText = new byte[gcmEngine.GetOutputSize(passwordAsByte.Length)];
    int len = gcmEngine.ProcessBytes(passwordAsByte, 0, passwordAsByte.Length, gcmText, 0);
    gcmEngine.DoFinal(gcmText, len);

    // BouncyCastle appends the 16-byte auth tag; the wire format wants it before the ciphertext
    byte[] encPass = new byte[gcmText.Length - 16];
    byte[] authTag = new byte[16];
    Array.Copy(gcmText, 0, encPass, 0, encPass.Length);
    Array.Copy(gcmText, encPass.Length, authTag, 0, 16);

    using (var result = new MemoryStream())
    {
        result.Write(header, 0, header.Length);
        result.Write(iv, 0, iv.Length);
        result.Write(sizeBuff, 0, sizeBuff.Length);
        result.Write(rsaEncrypted, 0, rsaEncrypted.Length);
        result.Write(authTag, 0, authTag.Length);
        result.Write(encPass, 0, encPass.Length);

        // the Java version returned the (date, base64) pair; return a combined value here
        return date + "|" + Convert.ToBase64String(result.ToArray());
    }
}

This is almost ready code; if there are corrections, write in the comments, I will fix it
If you check here: https://docs.microsoft.com/en-us/azure/active-directory/develop/access-tokens the newer access tokens, by default, follow the standard of not containing any personal information about the user. That is what the standards suggest: access tokens should not contain much, or any, information about the user, rather just information for authorization (the things that user can access). You can always add the optional claims for personal info (and Azure lets you do it), but best practice suggests you shouldn't. In terms of AddAuthentication: authentication is basically proving you are who you say you are. AddAuthentication() basically calls Microsoft Azure AD to perform this task, saying "hey AAD, please ask this person who he is"; Azure then checks and says "yes, this is a real person, but I won't tell you anything about them other than an id, and they have access to your api/application". So from your snippet, it's fine. At this point, your server side shouldn't have any personal information about the user, just that they have access and what scopes/roles. If it wants info about the user, it should then take that authorization (access token) and request it from whatever endpoint that token has access to (Graph). Here's a great read about it: https://auth0.com/blog/why-should-use-accesstokens-to-secure-an-api/ Hopefully this helps clarify somewhat, and doesn't add more confusion to the issue.
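To make the AddAuthentication() side concrete, a minimal ASP.NET Core sketch for validating such an access token on the API could look like this; the tenant and audience values are placeholders, not taken from your setup:

// Startup.ConfigureServices, using Microsoft.AspNetCore.Authentication.JwtBearer
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        // Azure AD v2.0 endpoint for your tenant (placeholder id)
        options.Authority = "https://login.microsoftonline.com/<tenant-id>/v2.0";
        // The App ID URI / client id your API was registered with (placeholder)
        options.Audience = "api://<api-client-id>";
    });

The middleware then only establishes identity and scopes/roles from the token, exactly as described above; any profile data has to come from a separate call to Graph.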
If they are well-programmed, Prolog predicates can run "backwards": f(X,Y) should be read as X is related to Y via f. Given an x, one can compute the Y (possibly several Y via backtracking): f(x,Y) is interpreted as the set of Y such that f(x) = Y. Given a y, one can compute the X (possibly several X via backtracking): f(X,y) is interpreted as the set of X such that f(X) = y. Given an (x,y), one can compute the truth value: f(x,y) is interpreted as true if (but not iff) f(x) == y. (How many Prolog predicates are "well-programmed"? If there is a study about how many Prolog predicates written outside of the classroom can work bidirectionally, I would like to know about it; my guess is most decay quickly into unidirectional functions, as it's generally not worth the hassle of adding the edge cases and the test code to make predicates work bidirectionally.) The above works best if f is bijective, i.e.: "no information is thrown away when computing in either direction", and the computation in both directions is tractable (having encryption work backwards is hard). So, in this case: delete(Element,ListWith,ListWithout) relates the three arguments (Element,ListWith,ListWithout) as follows: ListWithout is ListWith without Element. Note that "going forward" from (Element,ListWith) to ListWithout destroys information, namely the exact position of the deleted element, or even whether there was one in the first place. BAD! NOT BIJECTIVE! To make delete1/3 run backwards, we just have to query it with the element and the result list bound, leaving the original list unbound:

?- delete1(56,L,[a,b,c]).
L = [56, a, b, c] ;
L = [a, 56, b, c] ;
L = [a, b, 56, c] ;
L = [a, b, c, 56] ;

There are four solutions to the reverse deletion problem. And the program misses one: L = [a, b, c] or even a few more: L = [56, a, b, 56, c] etc. As you can see, it is important to retain information!
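For reference, here is a minimal delete1/3 that behaves exactly like the queries above. This is the classic textbook definition, assumed here since the question's version isn't shown; it removes a single occurrence:

% delete1(Element, ListWith, ListWithout):
% ListWithout is ListWith with one occurrence of Element removed.
delete1(X, [X|Tail], Tail).
delete1(X, [Head|Tail], [Head|Rest]) :-
    delete1(X, Tail, Rest).

Because this version insists that exactly one occurrence of Element is present and removed, it yields the four solutions shown and misses L = [a, b, c] (no occurrence) as well as L = [56, a, b, 56, c] (two occurrences).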
Oracle's default IV is NULL according to: https://docs.oracle.com/cd/B28359_01/appdev.111/b28419/d_crypto.htm#i1002112 .NET will not allow a NULL IV, so change the mode to ECB. Modifying your code to the below returns the value expected as per your question: public static string EncryptAES(string text) { AesCryptoServiceProvider aes = new AesCryptoServiceProvider(); aes.Key = Encoding.UTF8.GetBytes("12345678901234567890123456789012"); //aes.IV = Encoding.UTF8.GetBytes("0123456789ABCDEF"); - Remove aes.Mode = CipherMode.ECB; // Change mode to ECB aes.Padding = PaddingMode.Zeros; // Convert string to byte array byte[] src = Encoding.UTF8.GetBytes(text); // encryption using (ICryptoTransform encrypt = aes.CreateEncryptor()) { byte[] dest = encrypt.TransformFinalBlock(src, 0, src.Length); StringBuilder hex = new StringBuilder(dest.Length * 2); foreach (byte b in dest) hex.AppendFormat("{0:x2}", b); Console.WriteLine(hex); return hex.ToString() ; } }
A couple of things I can see: As far as I can tell, you didn't configure passport.js. You need to have a configuration file, which for you would be controllers/auth.js. To configure passport you need to run require('./controllers/auth')(passport); in app.js. For passport to be able to ingest that config you need to export it as a function that takes passport, e.g. module.exports = passport => { passport.use('facebook', ...) }. Your config file (in exports.subscribe) is not in a format that passport will understand; follow the documentation on how to create that config file. Passport provides you with authentication middleware, and I am pretty sure you cannot create "wrappers" for it like in controllers/auth.js. To access passport's auth functions you use passport.authenticate('facebook', callback())(req, res, next) in routes/users.js. Passport only provides middleware to serialize and deserialize users. Your deserialization is not yet set up: you need a call to the db to fetch the user from the session store.
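A minimal sketch of what such a config file could look like, assuming the passport-facebook strategy and a Mongoose-style User model (both illustrative, not from your code):

// controllers/auth.js - sketch only
const FacebookStrategy = require('passport-facebook').Strategy;
const User = require('../models/user'); // hypothetical model

module.exports = passport => {
  passport.use(new FacebookStrategy(
    {
      clientID: process.env.FACEBOOK_APP_ID,
      clientSecret: process.env.FACEBOOK_APP_SECRET,
      callbackURL: '/users/auth/facebook/callback'
    },
    (accessToken, refreshToken, profile, done) => {
      // find or create the user, then hand it to passport
      User.findOne({ facebookId: profile.id })
        .then(user => done(null, user || false))
        .catch(done);
    }
  ));

  passport.serializeUser((user, done) => done(null, user.id));
  passport.deserializeUser((id, done) => {
    // this is the db call mentioned above: fetch the user for the session
    User.findById(id).then(user => done(null, user)).catch(done);
  });
};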
This is more a comment than an answer, but it looks like you're new, so maybe I can help (at the risk of being totally wrong). You're working on configuring Nginx as a reverse proxy for Apache to run a Django application (by the way, I have never heard of anyone using Apache as a reverse proxy for Nginx). But there's some discussion in the Nginx documentation that makes me think it might not even be possible: reading through the Nginx docs on proxy_pass, they mention how proxying websockets requires a special configuration. That document explains that websockets require HTTP/1.1 (plus an Upgrade and a Connection HTTP header), so you can either basically fabricate them with proxy_set_header between your server and proxy, or pass them along if the client request includes them. Presumably, in this case, if the client didn't send the Upgrade header, you'd proxy_pass the connection to a server using plain TCP rather than websockets. Well, HTTP/2 is (I assume) another Upgrade, and one likely to have even less user-agent support. So at the least, you'd risk compatibility problems. Again, I have never configured Apache as a (reverse) proxy, but my guess from your configuration is that Apache would handle the encryption to the client, then probably have to re-encrypt for the connection to Nginx. That's a lot of overhead just on encryption, probable browser compatibility issues, and probably not a great setup. Again, I'm not sure it's possible; I came here looking for a similar answer, but you might want to reconsider.
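For reference, the special configuration the Nginx docs describe for passing websocket upgrades through a proxy looks roughly like this (the upstream name is a placeholder):

location / {
    proxy_pass http://backend;                 # placeholder upstream
    proxy_http_version 1.1;                    # websockets need HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;    # forward the client's Upgrade header
    proxy_set_header Connection "upgrade";
}

If the client never sent an Upgrade header, $http_upgrade is empty and the connection is proxied as a regular HTTP request, which is the fallback behaviour described above.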
Please follow these steps in order to correctly set up the Cloud Storage client library for Python. In general, the Cloud Storage libraries can use application default credentials or environment variables for authentication. Notice that the recommended method would be to set up authentication using environment variables (i.e. if you are using Linux: export GOOGLE_APPLICATION_CREDENTIALS="/path/to/[service-account-credentials].json" should work) and avoid the use of the service_account.Credentials.from_service_account_info() method altogether:

from google.cloud import storage

storage_client = storage.Client(project='project-id-where-the-bucket-is')
bucket_name = "your-bucket"
bucket = storage_client.get_bucket(bucket_name)

should simply work, because the authentication is handled by the client library via the environment variable. Now, if you are interested in explicitly using the service account instead of using service_account.Credentials.from_service_account_info(), you can use the from_service_account_json() method directly in the following way:

from google.cloud import storage

# Explicitly use service account credentials by specifying the private key file.
storage_client = storage.Client.from_service_account_json('/[service-account-credentials].json')
bucket_name = "your-bucket"
bucket = storage_client.get_bucket(bucket_name)

Find all the relevant details as to how to provide credentials to your application here.
This Google support page states that sign-in via browsers that "use automation testing frameworks" is being disabled for security reasons, and Google advises doing "Sign in with Google" using its browser-based OAuth 2.0 authentication service. Since some websites, like stackoverflow.com, allow you to sign in to their services using "Sign in with Google", that sign-in must happen via Google OAuth 2.0 authentication. This implies that by doing so you are also indirectly signing in to your Google account, and therefore you can use all the Google services. So you can fully automatically sign in to your Google account, e.g. by using a Python script, by performing these actions in your code: Open a new browser window that is controlled by the selenium webdriver. In the same window load the StackOverflow login page (or any other site that uses "Sign in with Google"). Choose "Log in with Google". Provide your Google account credentials and log in to StackOverflow. Load the Google mailbox by opening https://mail.google.com/ or https://www.gmail.com/ This way you land in your Gmail mailbox without performing any manual actions. Please remember to add some 5-second delays between the different actions, as doing them too quickly or too frequently can be recognized by StackOverflow as malicious automated actions; you can get blocked and will need to complete the manual "I'm not a robot" verification.
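A minimal Python/selenium sketch of those steps; note the CSS selector and field name are illustrative guesses, not verified against the current markup of either site:

import time
from selenium import webdriver

driver = webdriver.Chrome()  # assumes chromedriver is on your PATH
driver.get("https://stackoverflow.com/users/login")
time.sleep(5)

# selector is a guess - inspect the page for the real "Log in with Google" button
driver.find_element_by_css_selector("button[data-provider='google']").click()
time.sleep(5)

# Google's sign-in form; "identifier" is the email field name at the time of writing
driver.find_element_by_name("identifier").send_keys("you@gmail.com")
# ... continue with the password step, then:
time.sleep(5)
driver.get("https://mail.google.com/")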
even if Microsoft wanted a guid why an nvarchar(450) The length may seem a bit arbitrary, but SQL Server has a max key size of 900 bytes. That comes down to 450 unicode chars. So it basically is a "we made it as long as possible, you can use it as you like" kind of offering. The var in nvarchar means a key only occupies as much space as is actually needed; 450 is the max length. Changing it to, for example, an nvarchar(36) would save you zero space and time. In theory you could use something other than a string/nvarchar, like class ApplicationUser : IdentityUser<Guid>, but that would ripple out through a lot of related classes. And I know that string is nailed down as TKey in some library parts, so you would have to replace those. A lot of work. why is the email address / username not encrypted by default Encryption in/by your app would be of little use; where would you store that encryption key? If GDPR is a concern then you can configure encryption at the database level, where it belongs. But if you want to go DIY, the relevant columns are marked with [PersonalData], so you could hook in on that.
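For illustration, this is roughly where the ripple effect starts if you do change the key type (class names are placeholders, not from the question):

// Sketch: every Identity type has to agree on the key type
public class ApplicationUser : IdentityUser<Guid> { }
public class ApplicationRole : IdentityRole<Guid> { }

public class ApplicationDbContext
    : IdentityDbContext<ApplicationUser, ApplicationRole, Guid>
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options)
        : base(options) { }
}

// and Startup has to follow suit:
// services.AddIdentity<ApplicationUser, ApplicationRole>() ...

Every generic parameter, managers included, now carries Guid, which is the "lot of work" mentioned above.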
In my IntelliJ IDEA with Android 10.0 (API level 29, rev. 4) your decryptXLSX method works as expected, so it looks like your Android version is lower and does not support an internal method or crypto algorithm. Maybe you could check the underlying Java version and present it to us. You can do this with:

System.out.println("\nJava version:");
String[] javaVersionElements = System.getProperty("java.runtime.version").split("\\.|_|-b");
String discard, major, minor, update, build;
discard = javaVersionElements[0];
major = javaVersionElements[1];
minor = javaVersionElements[2];
update = javaVersionElements[3];
build = javaVersionElements[4];
System.out.println("discard: " + discard + " major: " + major + " minor: " + minor + " update: " + update + " build: " + build);

(Runtime.version isn't available with my Android build). My output is:

Java version:
discard: 11 major: 0 minor: 5+10 update: 520 build: 17

I didn't check whether the XLSX encryption needs unlimited cryptography, but just in case, you can check that with a few lines of code:

/**
 * Determines if cryptography restrictions apply.
 * Restrictions apply if the value of {@link Cipher#getMaxAllowedKeyLength(String)} returns a value smaller than {@link Integer#MAX_VALUE} if there are any restrictions according to the JavaDoc of the method.
 * This method is used with the transform <code>"AES/CBC/PKCS5Padding"</code> as this is an often used algorithm that is <a href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html#impl">an implementation requirement for Java SE</a>.
 *
 * @return <code>true</code> if restrictions apply, <code>false</code> otherwise
 *
 * code by Maarten Bodewes, https://stackoverflow.com/questions/7953567/checking-if-unlimited-cryptography-is-available#
 */
public static boolean restrictedCryptography() {
    try {
        return Cipher.getMaxAllowedKeyLength("AES/CBC/PKCS5Padding") < Integer.MAX_VALUE;
    } catch (final NoSuchAlgorithmException e) {
        throw new IllegalStateException("The transform \"AES/CBC/PKCS5Padding\" is not available (the availability of this algorithm is mandatory for Java SE implementations)", e);
    }
}

Just call the method with:

System.out.println("Java restricted cryptography: " + restrictedCryptography());

That's my output ("false" means unlimited cryptography):

Java restricted cryptography: false
A bit late, but I hope this answer can help other developers. I was also trying to achieve the same thing and ended up with a simple solution: supply a crypto object when calling .authenticate(), like this:

/**
 * Prerequisites:
 * 1. Add `implementation "androidx.biometric:biometric:1.0.1"` in build.gradle
 * 2. Add `<uses-permission android:name="android.permission.USE_BIOMETRIC" android:requiredFeature="false"/>`
 *    in AndroidManifest.xml
 */
object BiometricHelper {

    private const val ENCRYPTION_BLOCK_MODE = KeyProperties.BLOCK_MODE_GCM
    private const val ENCRYPTION_PADDING = KeyProperties.ENCRYPTION_PADDING_NONE
    private const val ENCRYPTION_ALGORITHM = KeyProperties.KEY_ALGORITHM_AES
    private const val KEY_SIZE = 128

    private lateinit var biometricPrompt: BiometricPrompt

    fun authenticate(fragmentActivity: FragmentActivity, authCallback: BiometricPrompt.AuthenticationCallback) {
        try {
            if (!fragmentActivity.supportFragmentManager.executePendingTransactions()) {
                biometricPrompt = createBiometricPrompt(fragmentActivity, authCallback)
                val promptInfo = createPromptInfo()
                biometricPrompt.authenticate(
                    promptInfo,
                    cryptoObject // Providing a crypto object here will block Iris and Face scan
                )
            }
        } catch (e: KeyPermanentlyInvalidatedException) {
            e.printStackTrace()
        } catch (e: Exception) {
            e.printStackTrace()
        }
    }

    private fun createBiometricPrompt(fragmentActivity: FragmentActivity, authCallback: BiometricPrompt.AuthenticationCallback): BiometricPrompt {
        val executor = ContextCompat.getMainExecutor(fragmentActivity)
        return BiometricPrompt(fragmentActivity, executor, authCallback)
    }

    private fun createPromptInfo(): BiometricPrompt.PromptInfo {
        return BiometricPrompt.PromptInfo.Builder()
            .setTitle("Authentication")
            .setConfirmationRequired(false)
            .setNegativeButtonText("Cancel")
            .setDeviceCredentialAllowed(false) // Don't allow PIN/pattern/password authentication.
            .build()
    }
    //endregion

    //====================================================================================
    //region Dummy crypto object that is used just to block Face and Iris scan
    //====================================================================================
    /**
     * The crypto object requires STRONG biometric methods, and currently Android considers
     * only fingerprint auth STRONG enough. Therefore, providing a crypto object while calling
     * [androidx.biometric.BiometricPrompt.authenticate] will block the Face and Iris scan methods.
     */
    private val cryptoObject by lazy { getDummyCryptoObject() }

    private fun getDummyCryptoObject(): BiometricPrompt.CryptoObject {
        val transformation = "$ENCRYPTION_ALGORITHM/$ENCRYPTION_BLOCK_MODE/$ENCRYPTION_PADDING"
        val cipher = Cipher.getInstance(transformation)
        var secKey = getOrCreateSecretKey(false)
        try {
            cipher.init(Cipher.ENCRYPT_MODE, secKey)
        } catch (e: KeyPermanentlyInvalidatedException) {
            e.printStackTrace()
            secKey = getOrCreateSecretKey(true)
            cipher.init(Cipher.ENCRYPT_MODE, secKey)
        } catch (e: Exception) {
            e.printStackTrace()
        }
        return BiometricPrompt.CryptoObject(cipher)
    }

    private fun getOrCreateSecretKey(mustCreateNew: Boolean): SecretKey {
        val keyStore = KeyStore.getInstance("AndroidKeyStore")
        keyStore.load(null)
        if (!mustCreateNew) {
            keyStore.getKey("dummyKey", null)?.let { return it as SecretKey }
        }
        val paramsBuilder = KeyGenParameterSpec.Builder("dummyKey",
            KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT)
        paramsBuilder.apply {
            setBlockModes(ENCRYPTION_BLOCK_MODE)
            setEncryptionPaddings(ENCRYPTION_PADDING)
            setKeySize(KEY_SIZE)
            setUserAuthenticationRequired(true)
        }
        val keyGenParams = paramsBuilder.build()
        val keyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")
        keyGenerator.init(keyGenParams)
        return keyGenerator.generateKey()
    }
    //endregion
}

Gist EDITED: This solution will work only if Face scan and/or Iris scan authentication on the device is considered a WEAK method.
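Calling the helper from a FragmentActivity then looks roughly like this (the callback bodies are illustrative):

// e.g. inside a click listener of your activity
BiometricHelper.authenticate(this, object : BiometricPrompt.AuthenticationCallback() {
    override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
        // fingerprint accepted - proceed with the protected action
    }
    override fun onAuthenticationError(errorCode: Int, errString: CharSequence) {
        // cancelled or failed - handle accordingly
    }
})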
Please check this. Here you need to add an <a> tag that carries the URL.

// Create one function for getting the URL at run time (localhost or any domain name)
public string GetUrl()
{
    var request = Request;
    return string.Format("{0}://{1}{2}", request.Url.Scheme, request.Url.Authority,
        (new System.Web.Mvc.UrlHelper(request.RequestContext)).Content("~"));
}

private string sendEmail(string emailId, string userID)
{
    try
    {
        userID = Encrypt(userID);
        MailMessage mail = new MailMessage();
        mail.To.Add(emailId);
        mail.From = new MailAddress(""); // fill in your sender address
        //mail.Subject = "Your password for account " + emailId;
        string userMessage = string.Concat("<a href='", GetUrl(), "LoginWithSession/ResetPassword/", userID, "'>");
        userMessage = userMessage + "<br/><b>User Id:</b> " + emailId + "</a>";
        //userMessage = userMessage + "<br/><b>Passsword: </b>" + password;
        string Body = "<br/><br/>Please click on the link to reset your password:<br/></br> " + userMessage + "<br/><br/>Thanks";
        mail.Body = Body;
        mail.IsBodyHtml = true;
        SmtpClient smtp = new SmtpClient();
        smtp.Host = "smtp.gmail.com"; // SMTP server address of gmail
        smtp.Port = 587;
        smtp.Credentials = new System.Net.NetworkCredential(); // SMTP email id and password for authentication
        smtp.EnableSsl = true;
        smtp.Send(mail);
        return userMessage;
    }
    catch (Exception ex)
    {
        return "Error............" + ex;
    }
}
Telegram bots only work with fully chained certificates, and the error in your getWebHookInfo:

"last_error_message":"SSL error {337047686, error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed}"

is Telegram saying that it needs the whole certificate chain (it's also called a CA bundle or full chained certificate), as answered in the question. If you validate your certificate using SSL Labs you will see that your domain has chain issues: https://www.ssllabs.com/ssltest/analyze.html?d=www.vallotta-party-bot.com&hideResults=on To solve this you need to set the CA certificate, which means you need to find the CA certificate file from your CA provider. Also, the best option for production sites is to use gunicorn instead of Flask's built-in server. If you are using gunicorn, you can do this with command line arguments:

$ gunicorn --certfile cert.pem --keyfile key.pem --ca-certs cert.ca-bundle -b 0.0.0.0:443 hello:app

Or create a gunicorn.py with the following content:

import multiprocessing

bind = "0.0.0.0:443"
workers = multiprocessing.cpu_count() * 2 + 1
timeout = 120
certfile = "cert/certfile.crt"
keyfile = "cert/service-key.pem"
ca_certs = "cert/cert.ca-bundle"
loglevel = 'info'

and run it as follows:

gunicorn --config=gunicorn.py hello:app

If you use Nginx as a reverse proxy, then you can configure the certificate with Nginx, and then Nginx can "terminate" the encrypted connection, meaning that it will accept encrypted connections from the outside, but then use regular unencrypted connections to talk to your Flask backend. This is a very useful setup, as it frees your application from having to deal with certificates and encryption. The configuration items for Nginx are as follows:

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
    # ...
}

Another important item you need to consider is how clients that connect through regular HTTP are going to be handled. The best solution, in my opinion, is to respond to unencrypted requests with a redirect to the same URL but on HTTPS. For a Flask application, you can achieve that using the Flask-SSLify extension. With Nginx, you can include another server block in your configuration:

server {
    listen 80;
    server_name example.com;
    location / {
        return 301 https://$host$request_uri;
    }
}

A good tutorial on how to set up your application with HTTPS can be found here: Running Your Flask Application Over HTTPS
This is because you haven't defined the method self.logged_in inside your spider class. You are referencing this method here:

return [scrapy.FormRequest("https://www.jancisrobinson.com/#login",
                           formdata={'user': 'john', 'pass': 'secret'},
                           callback=self.logged_in)]

What this does is that after scrapy performs this request, the self.logged_in method will be executed. You need to define this method:

class LoginSpider(scrapy.Spider):
    name = 'wine'
    start_urls = ['https://www.jancisrobinson.com/#login']

    def parse(self, response):
        return scrapy.FormRequest.from_response(
            response,
            formdata={'username': '[email protected]', 'password': 'purple'},
            callback=self.after_login)

    def after_login(self, response):
        # authentication_failed() is a helper you must implement yourself,
        # e.g. by checking the response URL or body for a login error
        if authentication_failed(response):
            self.logger.error("Login failed")
            return
        else:
            self.logger.error("Login succeeded!")
            item = SampleItem()
            item["quote"] = response.css(".text").extract()
            item["author"] = response.css(".author").extract()
            return item

    def start_requests(self):
        return [scrapy.FormRequest("https://www.jancisrobinson.com/#login",
                                   formdata={'user': 'john', 'pass': 'secret'},
                                   callback=self.logged_in)]

    def logged_in(self, response):
        # do something here
        pass

Or... you need to change self.logged_in to one of your existing methods. Please let me know if this helps, and if not, don't hesitate to ask any question; I'll be glad to help.
If you are running this on a service with a system-assigned Managed Identity, here's what actually happens (example for App Service, VM is slightly different): Your app reads IDENTITY_ENDPOINT and IDENTITY_HEADER environment variables HTTP call to IDENTITY_ENDPOINT using the IDENTITY_HEADER as authentication This endpoint cannot be called from the outside, only from within the instance. Its port is also random In the call, your app specifies it wants a token for Key Vault (resource https://vault.azure.net) The Managed Identity endpoint uses the certificate it has created to authenticate to Azure Active Directory with the Client Credentials flow Azure AD verifies the request and issues a token Managed Identity endpoint returns the token to your app KeyVaultClient uses the token to authorize the call to Key Vault On Virtual Machines, the Instance Metadata Service is used to get tokens. The main thing to understand is that any process running on the instance itself is capable of getting tokens from the Managed Identity. Though if you were to get malicious code running on your instance, other approaches could be in trouble as well :) If you run that code locally, it can work as well. AzureServiceTokenProvider also attempts Visual Studio authentication as well as Azure CLI authentication. So if you are logged in to e.g. AZ CLI, it is able to get an access token for your user to Azure Key Vault and access it. Reference: https://docs.microsoft.com/en-us/azure/app-service/overview-managed-identity?context=azure%2Factive-directory%2Fmanaged-identities-azure-resources%2Fcontext%2Fmsi-context&tabs=dotnet#rest-protocol-examples Example request done in VMs: https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-nonaad#access-data
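For context, the application code that kicks off this whole flow is typically just the following (the vault URL and secret name are placeholders):

// Microsoft.Azure.Services.AppAuthentication + Microsoft.Azure.KeyVault packages
var tokenProvider = new AzureServiceTokenProvider();
var keyVaultClient = new KeyVaultClient(
    new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

// Behind this call: IDENTITY_ENDPOINT/IDENTITY_HEADER (or IMDS on a VM) -> AAD -> token
var secret = await keyVaultClient.GetSecretAsync(
    "https://my-vault.vault.azure.net/", "my-secret"); // placeholders

Locally, the same two lines fall back to Visual Studio or Azure CLI credentials, exactly as described above.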
Sure. You can write your own web server using the http or https modules directly. Here's a very basic server. const http = require('http'); const server = http.createServer((req, res) => { // all incoming http requests to your server will arrive here console.log(req.url); res.end("hi"); }); server.listen(80); Of course, the reason that Express exists and that a lot of people use it is that it makes a lot of things that one might normally do in a web server a lot simpler than coding those things yourself with a plain http server and it gives you access to a ready made library of middleware on NPM (for things like session management, authentication, mime parsing, uploads, etc...). But, nobody requires you to use the higher level framework. You can code that stuff yourself if you want to. Of course, if you really wanted to get down to the lowest level, you could even write your own http server using only the Net module, but then you'd be writing code for the http protocol too.
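And if you really wanted the lowest level mentioned above, a bare-bones sketch with the Net module would be something like this; you then speak raw HTTP over TCP yourself:

const net = require('net');

net.createServer(socket => {
  socket.once('data', chunk => {
    // chunk contains the raw request bytes; parsing them is now your job
    socket.end('HTTP/1.1 200 OK\r\nContent-Length: 2\r\nConnection: close\r\n\r\nhi');
  });
}).listen(80);

At this layer you give up everything the http module does for you (request parsing, headers, chunked encoding, keep-alive), which is exactly why most people stop at http or Express.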
if the char array of size 20 is storing the word "elephant"(char[0] to char[7]) which is 8 characters long 9 actually, if you count the null terminator at char[8]. what is stored in the rest of the char[] starting from char[8] to char[19]? That content is indeterminate, since you are not initializing the array with any data before reading into the array, and reading a word that is less than the array size does not populate the unused portions of the array. Also, you are not specifying the size of the array when reading, so you have a potential buffer overflow waiting to happen. Imagine what would happen if the user typed in a word that has 20+ characters in length. cin would not know when to stop reading, so it would overflow the buffer into surrounding memory. So, you should be using cin's get() method instead: cin.get(text, sizeof(text)); Or, if you want the text to allow spaces, the getline() method: cin.getline(text, sizeof(text)); get()/getline() ensures the read does not exceed the buffer, and the output is null-terminated (truncating the text if needed) . cin.gcount() will tell you how many characters were actually read. Or better, use a std::string instead: string text; cin >> text; // or: getline(cin, text); This will ensure the full word (or line) is read, regardless of its length. does this text take 8 bytes on the disk or does it take up 20 bytes on the disk since char[] is 20 bytes in size? 20 bytes will be written to the file 1, because that is how many bytes you are telling write() to write (sizeof(text)), regardless of the array's actual content. 1: the actual number of bytes the file takes up on disk depends on multiple factors: the particular filesystem being used, whether the file is sparse or compressed, etc. But lets just assume a simple filesystem with no sparcity/compression. The file will take up however many bytes you write, rounded up to an even multiple of the disk's cluster size, plus overhead for tracking metdata about the file. If you want to write only to the end of the text that was read, and not to the end of the array, then you would need something more like this instead: char text[20] = {}; fstream file("temp.dat", ios::binary|ios::in|ios::out|ios::app); cout << "Enter the text: " << endl; cin.get(text, sizeof(text)); // or: cin.getline(text, sizeof(text)); // file write operation file.write(text, cin.gcount()/*or: strlen(text)*/); // file write operation done Or better: string text; fstream file("temp.dat", ios::binary|ios::in|ios::out|ios::app); cout << "Enter the text: " << endl; cin >> text; // or: getline(cin, text); // file write operation file << text; // or: file.write(text.c_str(), text.size()); // file write operation done
I definitely do not recommend using the SPA template to set up WebApi authorization, but if you want to... To accomplish this task you need to override 2 methods in your ApplicationOAuthProvider (inherited from the OAuthAuthorizationServerProvider class):

public override async Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
{
    context.Validated(); // Set up your context to be valid every time
}

public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
{
    context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { "*" }); // Disable CORS policies

    var identity = new ClaimsIdentity(context.Options.AuthenticationType);
    identity.AddClaim(new Claim("email", context.UserName)); // Add required claims you want to encrypt in the bearer token
    context.Validated(identity); // Return valid token
}

You can change the validation or grant-resources logic in these two methods to fit your authorization workflow. At the very least, without any additional conditions, we can receive a valid access token.
Short version - No. By default, you are creating an electronic signature. Long version (excerpted from a support article on this very topic) An electronic signature, or eSignature, is the broad umbrella category under which all electronic signatures fall. Digital signatures are a specific signature technology implementation of electronic signature. Organizations typically refer to eSignature as the process a person goes through to demonstrate their intent during an electronic transaction whereas a digital signature refers to the encryption technology containing critical metadata pertaining to the e-signature. The eSignature is the legally binding record while the digital signature is the underlying technology that helps verify the authenticity of the transaction. DocuSign Standards-Based Signatures is a core feature of DocuSign’s platform that enables customers to enjoy the full range of signature capabilities while staying compliant with local and industry e-signature standards. In the EU, there are three key signatures in the DocuSign Standards-Based Signature portfolio, Express Signature, EU Advanced Signature, and EU Qualified Signature. Should you want to implement digital signatures with DocuSign, you would want to enable the Standards-Based Signatures feature. Assuming you are a developer using our eSignature API, contact support to have this feature turned on within your demo sandbox account. More specific info on SBS here.
It sounds like your anonymous authentication has been disabled, or your current login user doesn't have permission to view public.htm. If you are hosting it in VS, please ensure Enable anonymous authentication is selected and your current logon user has permission to access the htm file. If you are hosting it in IIS, please ensure anonymous authentication is enabled and that the authorization rule looks like

<authorization>
    <deny users="?" />
    <allow users="*" />
</authorization>

The authentication in applicationhost.config would look like

<location path="Sitename">
    <system.webServer>
        <security>
            <authentication>
                <anonymousAuthentication enabled="true" />
            </authentication>
        </security>
    </system.webServer>
</location>

And the authorization rule for public.htm would be

<location path="public.htm">
    <system.web>
        <authorization>
            <allow users="*" />
        </authorization>
    </system.web>
</location>

Please remember to grant IUSR read permission to access public.htm.
I had a similar problem with my app and found that the best solution is to implement the FIDO protocol which allows users to sign-in using a private key stored securely in their mobile device: During registration with an online service, the user’s client device creates a new key pair. It retains the private key and registers the public key with the online service. Authentication is done by the client device proving possession of the private key to the service by signing a challenge. To visualize it, here's an explanation with a graph of how FIDO works. Why is this the best solution? It's very convenient because your users don't need to do any interaction to log in as long as your app is accessed from the device that holds the private key. You can authenticate them silently from the app. If the user is accessing the app from a different phone, you can send a push notif and show a prompt in your app to approve the login request. This method is secure because: It avoids bad passwords from your users. The private key never leaves the device. This means you never need to store passwords in your database. The asymmetric cryptography ensures that only the device that holds the private key can make a valid signature. How to implement it in Android: For an Android app, the easiest way to implement this is to use Cotter, an authentication service like Firebase but focusing on passwordless login. You can make a free account to get your API keys. Documentation: This is the guide on implementing the FIDO protocol above for Android. If you'd like to do it yourself, you can check out the Android Keystore System.
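If you would rather build this on the Android Keystore system directly, the core of the protocol is "generate a keypair, sign the server's challenge". Here is a rough Java sketch; the key alias and the serverChallenge variable are illustrative:

// Generate a device-bound EC keypair; the private key never leaves the Keystore
KeyPairGenerator kpg = KeyPairGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore");
kpg.initialize(new KeyGenParameterSpec.Builder(
        "login-key", KeyProperties.PURPOSE_SIGN)    // alias is illustrative
        .setDigests(KeyProperties.DIGEST_SHA256)
        .setUserAuthenticationRequired(true)        // gate signing behind biometrics/PIN
        .build());
KeyPair keyPair = kpg.generateKeyPair();
// register keyPair.getPublic() with your server once, at enrollment

// Authentication: prove possession of the private key by signing the challenge
Signature signature = Signature.getInstance("SHA256withECDSA");
signature.initSign(keyPair.getPrivate());
signature.update(serverChallenge); // bytes your backend sent for this login attempt
byte[] proof = signature.sign();   // send back; the server verifies with the stored public key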
I've been experimenting with @Ryan's answer, and found that while it is working, a far simpler solution is to use sodium-plus. An example of a sodium-plus script can be found here. In short, the encryption side looks like this: <script type='text/javascript' src='sodium-plus.min.js'></script> <script> async function encryptString(clearText) { if (!window.sodium) window.sodium = await SodiumPlus.auto(); let publicKey = await X25519PublicKey.from('[Place your 64-char public key hex or variable name here]','hex'); let cipherText = await sodium.crypto_box_seal(clearText, publicKey); return cipherText.toString('hex'); } (async function () { let clearText = "String that contains secret."; console.log(await encryptString(clearText)); })(); </script> A lot simpler. On the PHP side, all you'll need to do is use the sodium methods to handle encryption/decryption of strings. The only downside with sodium-plus is that I haven't found a CDN for the browser version yet.
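On the PHP side, the matching decryption is a sealed-box open. A minimal sketch, assuming you persist a keypair and that the 64-char hex key used in the JavaScript is sodium_bin2hex(sodium_crypto_box_publickey($keypair)):

// $cipherHex is the hex string produced by encryptString() in the browser
$keypair = sodium_crypto_box_keypair(); // for illustration only; load your stored keypair instead
$ciphertext = sodium_hex2bin($cipherHex);
$plaintext = sodium_crypto_box_seal_open($ciphertext, $keypair);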
There are three issues in your code: According to the specification PBKDF2 is used with HMAC-SHA1 (and not HMAC-SHA256), s. 3.4.2 Encryption Process The key s derived with PBKDF2WithHmacSHA256 is an instance of PBKDF2KeyImpl, which requires a UTF8 string as password (see docs of the PBKDF2KeyImpl class). Here, however, the password is a hash, which is generally not compatible with UTF8. A possible solution is to replace PBEKeySpec with BouncyCastle's PKCS5S2ParametersGenerator, which expects the password as byte array (in init). For this solution replace SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256"); KeySpec keySpec = new PBEKeySpec(chars, salt, iterationCount, keySize * 8); SecretKey s = factory.generateSecret(keySpec); Key key = new SecretKeySpec(s.getEncoded(), "AES"); with PBEParametersGenerator generator = new PKCS5S2ParametersGenerator(new SHA1Digest()); generator.init(hashedPassword, salt, iterationCount); KeyParameter keyParam = (KeyParameter)generator.generateDerivedParameters(keySize * 8); Key key = new SecretKeySpec(keyParam.getKey(), "AES"); The padding used is ISO10126Padding, so AES/CBC/PKCS7Padding must by replaced by AES/CBC/ISO10126Padding. The easiest way to verify this is to decrypt the target ciphertext (encrypted) without removing the padding (AES/CBC/NoPadding). The last block is 06230276DDC67229EB31E830A1D7500F, which complies with ISO10126Padding. For ISO10126Padding, the last byte specifies the number of padding bytes, which (apart from the last byte) consist of random values. So in this case the last 15 bytes are padding bytes. ISO10126Padding is also the reason why a comparison of the ciphertext on byte level with this.check("Encrypted", Arrays.equals(encrypted, result)); fails. When comparing the ciphertext, the padded block must therefore not be taken into account.
So, start with a way of managing your tokens. Here's a basic model:

class Token(models.Model):
    code = models.CharField(max_length=255)
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    expires = models.DateTimeField()

A custom authentication backend can be produced to check the validity of the tokens:

class TokenAuthenticationBackend(ModelBackend):
    def authenticate(self, request, token=None):
        try:
            token = Token.objects.get(code=token, expires__gte=now())
        except Token.DoesNotExist:
            return None
        else:
            return token.user

If you're using class-based views, you could write a mixin that checks for the presence of the token and then does your authentication logic (note that authenticate() takes the token as a keyword argument, matching the backend's signature):

class UrlTokenAuthenticationMixin:
    def dispatch(self, request, *args, **kwargs):
        if 'token' in request.GET:
            user = authenticate(request, token=request.GET['token'])
            if user:
                login(request, user)
        return super(UrlTokenAuthenticationMixin, self).dispatch(request, *args, **kwargs)

To use this on a given view, just declare your views as follows:

class MyView(UrlTokenAuthenticationMixin, TemplateView):
    # view code here

For example. An alternative way to implement this as a blanket catch-all would be to use middleware rather than a mixin:

class TokenAuthMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if 'token' in request.GET:
            user = authenticate(request, token=request.GET['token'])
            if user:
                login(request, user)
        return self.get_response(request)
Your UserService is not properly injected and therefore not correctly initialized, which gives you the null pointer exception. Below are a few things that you can do: In your JWTAuthorizationFilter, inject the user service:

public class JWTAuthorizationFilter extends BasicAuthenticationFilter {

    private final IUserService userService; // create IUserService as below

    public JWTAuthorizationFilter(AuthenticationManager authManager, IUserService userService) {
        super(authManager);
        this.userService = userService;
    }
    (...)
}

Remove UserService userService = new UserService(); from your getAuthentication method, because you already have the userService instance injected through the constructor. In your WebSecurity class (or whatever class implements WebSecurityConfigurerAdapter), do the same, injecting IUserService:

private final IUserService userService;

public WebSecurity(IUserService userService) {
    this.userService = userService;
}

In the same class (WebSecurity) you override the configure method adding the JWTAuthorizationFilter. Now you have to pass the userService:

.addFilter(new JWTAuthorizationFilter(authenticationManager(), userService))

Finally, create an interface IUserService:

public interface IUserService {
    ApplicationUser getByUsername(String username);
}

Change your UserService to implement IUserService (remove @Component, because it's actually a @Service):

@Service
public class UserService implements IUserService {
    // keep it as is
}

Note: you could name your interface UserService and the concrete implementation UserServiceImpl, because you will always be using the interface and not the concrete implementation, so it reads better. Bonus: the difference between the @Component and @Service annotations: https://www.baeldung.com/spring-component-repository-service
Checkmarx Heap Inspection Security Vulnerability Hi all, I faced this one when I had used a String-type variable for the password in my Spring application, like below:

class User {
    private String username;
    private String password;
    //setter
    //getter
}

Then, to resolve this issue, I did the following steps:

1. Create a SecureString class like below:

import java.security.SecureRandom;
import java.util.Arrays;

/**
 * This is not a string but a CharSequence that can be cleared of its memory.
 * Important for handling passwords. Represents text that should be kept
 * confidential, such as by deleting it from computer memory when no longer
 * needed or garbage collected.
 */
/**
 * Created by Devendra on 16/04/2020
 */
public class SecureString implements CharSequence {

    private final int[] chars;
    private final int[] pad;

    public SecureString(final CharSequence original) {
        this(0, original.length(), original);
    }

    public SecureString(final int start, final int end, final CharSequence original) {
        final int length = end - start;
        pad = new int[length];
        chars = new int[length];
        scramble(start, length, original);
    }

    @Override
    public char charAt(final int i) {
        return (char) (pad[i] ^ chars[i]);
    }

    @Override
    public int length() {
        return chars.length;
    }

    @Override
    public CharSequence subSequence(final int start, final int end) {
        return new SecureString(start, end, this);
    }

    /**
     * Convert array back to String but not using toString(). See toString() docs
     * below.
     */
    public String asString() {
        final char[] value = new char[chars.length];
        for (int i = 0; i < value.length; i++) {
            value[i] = charAt(i);
        }
        return new String(value);
    }

    /**
     * Manually clear the underlying array holding the characters
     */
    public void clear() {
        Arrays.fill(chars, '0');
        Arrays.fill(pad, 0);
    }

    /**
     * Protect against using this class in log statements.
     * <p>
     * {@inheritDoc}
     */
    @Override
    public String toString() {
        return "Secure:XXXXX";
    }

    /**
     * Called by garbage collector.
     * <p>
     * {@inheritDoc}
     */
    @Override
    public void finalize() throws Throwable {
        clear();
        super.finalize();
    }

    /**
     * Randomly pad the characters to not store the real character in memory.
     *
     * @param start start of the {@code CharSequence}
     * @param length length of the {@code CharSequence}
     * @param characters the {@code CharSequence} to scramble
     */
    private void scramble(final int start, final int length, final CharSequence characters) {
        final SecureRandom random = new SecureRandom();
        for (int i = start; i < length; i++) {
            final char charAt = characters.charAt(i);
            pad[i] = random.nextInt();
            chars[i] = pad[i] ^ charAt;
        }
    }
}

2. Create a custom property editor as:

import java.beans.PropertyEditorSupport;
import org.springframework.util.StringUtils;

public class SecureStringEditor extends PropertyEditorSupport {

    @Override
    public String getAsText() {
        SecureString value = (SecureString) getValue();
        SecureString secStr = new SecureString(value);
        return (value != null) ? secStr.asString() : "";
    }

    @Override
    public void setAsText(String text) throws java.lang.IllegalArgumentException {
        if (StringUtils.isEmpty(text)) {
            setValue(null);
        } else {
            setValue(new SecureString(text));
        }
    }
}

3. Register this custom property editor in the spring-bean.xml file as:
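The registration snippet appears to have been cut off here; a minimal sketch of registering such an editor via Spring's CustomEditorConfigurer, with placeholder package names, would be:

<bean class="org.springframework.beans.factory.config.CustomEditorConfigurer">
    <property name="customEditors">
        <map>
            <!-- key: target type, value: editor class (placeholder packages) -->
            <entry key="com.example.security.SecureString"
                   value="com.example.security.SecureStringEditor"/>
        </map>
    </property>
</bean>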
I believe there is no single bullet-proof solution to your problem, but it is still possible to mitigate the risks by applying at least the following changes (if applicable to your application): Users must register before being able to load/define a dataset URL. The registration process has to be secure, strict, and robust enough to push back most malicious actors: ask for an email, email verification, a captcha, and eventual personal or company information. Then enable two-factor authentication with a phone number or anything similar that will further "characterize" or "personalize" the user. Once registration is secure and trusted, you will worry less about malicious changes to URLs. Manually verify each updated dataset (URL and site). Even better, automate this, though it is quite hard: there are sandboxing tools for web sites that could connect to the URL and perform security checks. I'll let you Google that. Use free or paid online site security checkers. Each time a URL is updated, perform a security scan on it using tools like https://www.virustotal.com/ or https://sitecheck.sucuri.net/. I'll also let you Google those. Eventually enforce high security standards for companies publishing their datasets; for instance, request ISO 27001 compliance. Not the friendliest solution, but this is a guarantee of quality. Also require TLS-secured URLs and make sure certificates are valid when the URL is set (this can easily be tested programmatically). You may also raise alerts or trigger manual verification only when the domain changes, in that case. Offer all users the possibility to report a broken or malicious link. I will edit my post if something else comes to mind, but I think the above propositions could already significantly raise the security level of your website. Of course, don't forget to implement your own security, for example by doing regular security assessments and penetration testing.
What you want, reading your problem, is to have two authentication types (token and httpBasic) for two different sets of endpoints. It can be achieved by creating two different WebSecurityConfigurerAdapter beans. Spring Boot enables this, and it can be done as below: @Order(1): /resource|user|appointment/** protected by bearer-token authentication. @Order(2): /internal/** protected by basic auth. View the docs for Spring Boot and sample code here.

@EnableWebSecurity
public class SecurityConfig {

    @Configuration
    @Order(1)
    public static class ApiSecurityConfig extends WebSecurityConfigurerAdapter { // nested @Configuration classes must be static

        @Override
        protected void configure(HttpSecurity http) throws Exception {
            http
                // repeated .antMatcher(...) calls overwrite each other, so declare
                // all three patterns in a single requestMatchers() block
                .requestMatchers()
                    .antMatchers("/resource/**", "/user/**", "/appointment/**")
                    .and()
                .authorizeRequests()
                    .anyRequest().authenticated()
                    .and()
                .sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS)
                .and()
                .addFilterBefore(jwtTokenFilter(), UsernamePasswordAuthenticationFilter.class); // jwtTokenFilter() is your own bean
        }
    }

    @Configuration
    @Order(2)
    public static class WebSecurityConfig extends WebSecurityConfigurerAdapter {

        @Override
        protected void configure(HttpSecurity http) throws Exception {
            http.csrf().disable()
                .authorizeRequests()
                    .antMatchers("/internal/**").authenticated()
                    .and()
                .httpBasic();
        }
    }
}
You could make this.user an RxJS BehaviorSubject observable as well. So now you can push the user to the observable and it can be listened to in the changeUsername function. Then the switchMap operator can be used to map the user value to the HTTP request. Note that if the user value is still null, we return an RxJS EMPTY instead of the request. Try the following:

import { BehaviorSubject, EMPTY, Observable, Subscription } from 'rxjs';
import { switchMap, take } from 'rxjs/operators';

@Injectable({ providedIn: 'root' })
export class SettingsService implements OnInit {
  user = new BehaviorSubject<User | null>(null);
  userSubscription: Subscription;

  constructor(
    private http: HttpClient,
    private authService: AuthenticationService
  ) {}

  // note: Angular only calls ngOnInit() on components/directives,
  // so call this yourself (e.g. from the constructor) for a service
  ngOnInit(): void {
    this.userSubscription = this.authService.getUser().subscribe((user) => {
      if (user !== null) {
        this.user.next(user); // <-- push it to the observable
        // `return` isn't required here
      }
    });
  }

  changeUsername(username: string): Observable<{ message: string }> {
    console.log(username);
    return this.user.pipe(
      take(1),
      switchMap(user => {
        if (user) {
          console.log(user.username);
          return this.http.post<{ message: string }>(
            `${environment.apiUri}/users/${user.username}/username`,
            { username }
          );
        } else {
          return EMPTY;
        }
      })
    );
  }
}

Update from comment You could still subscribe to the changeUsername function as before. The only difference is that the call is made only if the user value isn't null. You could also return an error (e.g. using throwError) instead of EMPTY and retry the request after some time using the retryWhen operator with a delay (e.g. using timer).
I was able to get this working after like a week - working Arduino documentation on integration with other systems is crap : ) Working Arduino code: #include "mbedtls/aes.h" #include <Arduino.h> #include <HTTPClient.h> #include <base64.h> void makeUpdateAPICall() { if (WiFi.status() == WL_CONNECTED) { HTTPClient http; // Your Domain name with URL path or IP address with path http.begin(serverName); // Specify content-type header http.addHeader("Content-Type", "text/plain"); http.addHeader("Authorization", "Bearer XXXXXXXX [whatever your web token is]"); http.addHeader("X-Content-Type-Options", "nosniff"); http.addHeader("X-XSS-Protection", "1; mode=block"); //AES Encrypt esp_aes_context aesOutgoing; unsigned char key[32] = "1234567812345678123456781234567" ; key[31] = '8'; // we replace the 32th (index 31) which contains '/0' with the '8' char. char *input = "Tech tutorials x"; unsigned char encryptOutput[16]; mbedtls_aes_init(&aesOutgoing); mbedtls_aes_setkey_enc(&aesOutgoing, key, 256); int encryptAttempt = mbedtls_aes_crypt_ecb(&aesOutgoing, MBEDTLS_AES_ENCRYPT, (const unsigned char *)input, encryptOutput); USE_SERIAL.println(); USE_SERIAL.println("MBEDTLS_AES_EBC encryption result:\t "); USE_SERIAL.print(encryptAttempt); //0 means that the encrypt/decrypt function was successful USE_SERIAL.println(); mbedtls_aes_free(&aesOutgoing); int encryptSize = sizeof(encryptOutput) / sizeof(const unsigned char); USE_SERIAL.println("Size of AES encrypted output: "); USE_SERIAL.println(encryptSize); //Base 64 Encrypt int inputStringLength = sizeof(encryptOutput); int encodedLength = Base64.decodedLength((char *)encryptOutput, inputStringLength); char encodedCharArray[encodedLength]; Base64.encode(encodedCharArray, (char *)encryptOutput, inputStringLength); //Send to server USE_SERIAL.print("Sending to server."); int httpResponseCode = http.POST(encodedCharArray); String payload = "{}"; if (httpResponseCode > 0) { //Retrieve server response payload = http.getString(); } // Free resources http.end(); } WiFi.disconnect(); } Working Java code: public static String decrypt(String strToDecrypt, String key) { byte[] encryptionKeyBytes = key.getBytes(); Cipher cipher; try { cipher = Cipher.getInstance("AES/ECB/NoPadding"); SecretKey secretKey = new SecretKeySpec(encryptionKeyBytes, "AES"); cipher.init(Cipher.DECRYPT_MODE, secretKey); return new String(cipher.doFinal(Base64.getDecoder().decode(strToDecrypt.getBytes("UTF-8")))); } catch (NoSuchAlgorithmException | NoSuchPaddingException e) { e.printStackTrace(); } catch (InvalidKeyException e) { e.printStackTrace(); } catch (IllegalBlockSizeException e) { e.printStackTrace(); } catch (BadPaddingException e) { e.printStackTrace(); } catch (UnsupportedEncodingException e) { // TODO Auto-generated catch block e.printStackTrace(); } return null; } Working on the return process now. You call the Java side with this code: final String decryptedText = AES.decrypt(encryptedStr, "12345678123456781234567812345678"); System.out.println("Decrypted AES ECB String: "); System.out.println(decryptedText); Wanted to provide this for any poor slob who finds him/herself in the same boat : ) Hope this helps!
You do not need to add sudo to your commands when you are using become: true. You can check out become_method in the documentation; it will prepend sudo for you when you use become: true. I highly recommend reading the documentation on privilege escalation: https://docs.ansible.com/ansible/latest/user_guide/become.html UPDATE Sorry, I misunderstood your question. The default become_method in ansible.cfg is sudo. When you set become: true without specifying become_method, it will basically add a sudo prefix to your command. Here is an example I created:

# privilege_escalation.yaml
---
- name: privilege escalation
  hosts: localhost
  tasks:
    - name: command without any escalation
      shell: env

    - name: command with sudo
      shell: sudo env

    - name: command with become and sudo
      shell: sudo env
      become: yes

You can run the example with this command:

ansible-playbook -vvv --ask-become-pass privilege_escalation.yaml

The first task will run env. In the results you can see the USER=your_user line, which represents the current user. When you use sudo in the command, the second task will run sudo env. In the results you can see USER=root and SUDO_USER=your_user. This means you escalated your privileges to become root when running the env command; the SUDO_USER environment variable represents the user who invoked sudo. The last task will run sudo sudo env. In the results you can see USER=root and SUDO_USER=root. This means you first become the root user, and after that the root user executes sudo env. I hope this helps.
The similar requirements implemented below works for me: @Configuration @EnableWebSecurity public class ServerSecurityConfig { @Configuration @Order(1) public static class CustomAutorizeURLSecurityConfig extends WebSecurityConfigurerAdapter { @Autowired @Qualifier("sudoUserDetailsService") private UserDetailsService sudoUserDetailsService; @Override @Bean public AuthenticationManager authenticationManagerBean() throws Exception { return super.authenticationManagerBean(); } @Override protected void configure(AuthenticationManagerBuilder auth) throws Exception { auth.userDetailsService(sudoUserDetailsService); } @Override protected void configure(HttpSecurity http) throws Exception { http.antMatcher("/oauth/custom_authorize") .csrf().disable() .authorizeRequests() .anyRequest().authenticated() .and() .httpBasic() ; http .sessionManagement() .sessionCreationPolicy(SessionCreationPolicy.NEVER); } } @Configuration @Order(2) @Import(Encoders.class) public static class OtherURLSecurityConfig extends WebSecurityConfigurerAdapter { @Autowired @Qualifier("customUserDetailsService") private UserDetailsService customUserDetailsService; @Autowired private PasswordEncoder userPasswordEncoder; @Override @Bean public AuthenticationManager authenticationManagerBean() throws Exception { return super.authenticationManagerBean(); } @Override protected void configure(AuthenticationManagerBuilder auth) throws Exception { auth.userDetailsService(customUserDetailsService).passwordEncoder(userPasswordEncoder); } @Override protected void configure(HttpSecurity http) throws Exception { http .csrf().disable() .authorizeRequests() .antMatchers("/resources/**").permitAll() .antMatchers("/shutdown").permitAll() .antMatchers("/health").permitAll() .antMatchers("/info").permitAll() .anyRequest().authenticated() .and() .formLogin() .loginPage("/login") .permitAll() ; http .sessionManagement() .sessionCreationPolicy(SessionCreationPolicy.NEVER); } } } It implements 2 authentication mechanisms - 1) Basic Authentication and 2) Form Login Authentication
The background information at bottom indicates that you do not need to create a new SoupSession to make subsequent authentication requests. It is not clear though that the soup_auth_authenticate() call is the method to do that. Following is the list of authentication related calls from this libsoup page: SoupAuth * soup_auth_new () gboolean soup_auth_update () gboolean soup_auth_negotiate_supported () gboolean soup_auth_is_for_proxy () const char * soup_auth_get_scheme_name () const char * soup_auth_get_host () const char * soup_auth_get_realm () char * soup_auth_get_info () void soup_auth_authenticate () gboolean soup_auth_can_authenticate () gboolean soup_auth_is_authenticated () gboolean soup_auth_is_ready () char * soup_auth_get_authorization () GSList * soup_auth_get_protection_space () void soup_auth_free_protection_space () Reading between the lines in this Basics page seems to suggest it is possible to make multiple authentication requests in a single SoupSession. Handling Authentication SoupSession handles most of the details of HTTP authentication for you. If it receives a 401 ("Unauthorized") or 407 ("Proxy Authentication Required") response, the session will emit the authenticate signal, providing you with a SoupAuth object indicating the authentication type ("Basic", "Digest", or "NTLM") and the realm name provided by the server. If you have a username and password available (or can generate one), call soup_auth_authenticate to give the information to libsoup. The session will automatically requeue the message and try it again with that authentication information. (If you don't call soup_auth_authenticate, the session will just return the message to the application with its 401 or 407 status.) If the server doesn't accept the username and password provided, the session will emit authenticate again, with the retrying parameter set to TRUE. This lets the application know that the information it provided earlier was incorrect, and gives it a chance to try again. If this username/password pair also doesn't work, the session will contine to emit authenticate again and again until the provided username/password successfully authenticates, or until the signal handler fails to call soup_auth_authenticate, at which point libsoup will allow the message to fail (with status 401 or 407). If you need to handle authentication asynchronously (eg, to pop up a password dialog without recursively entering the main loop), you can do that as well. Just call soup_session_pause_message on the message before returning from the signal handler, and g_object_ref the SoupAuth. Then, later on, after calling soup_auth_authenticate (or deciding not to), call soup_session_unpause_message to resume the paused message. This Manpagez post also discusses more than one call to authenticate per session: Most applications will only need a single SoupSession; the primary reason you might need multiple sessions is if you need to have multiple independent authentication contexts. (Eg, you are connecting to a server and authenticating as two different users at different times; the easiest way to ensure that each SoupMessage is sent with the authentication information you intended is to use one session for the first user, and a second session for the other user.)
1) Windows pop-up is not displayed to enter username and password If you use IIS to host the web application, you should enable Windows authentication by using the IIS management console or by modifying the web.config file. As far as I know, launchSettings.json: Is only used on the local development machine. Is not deployed. Contains profile settings. That means the Windows authentication setting will not be enabled in IIS after deployment; you should enable it by using the IIS management console. Open the authentication feature: Disable anonymous authentication and enable Windows authentication. Or you could add the settings below to your web.config:

<system.webServer>
  <security>
    <authentication>
      <anonymousAuthentication enabled="false" />
      <windowsAuthentication enabled="true" />
    </authentication>
  </security>
</system.webServer>

If you want to use Kestrel instead of IIS to host the application, you should use HTTP.sys in your application. For more details about how to use it, you can refer to this article. 2) I need to display Windows user name and directory details (department, e.g. IT or Sales, based on the username) If you want to get the user name, I suggest you try User.Identity.Name. If you want to get the current user's department, I suggest you try this solution.
"logging in with wrong credentials throws 401 - unauthorized error"

Because you are doing:

.failureUrl("/guest/login")
.failureHandler(new MyAuthenticationFailureHandler())

What failureUrl() configures is overridden by the subsequent failureHandler(). So the customised SimpleUrlAuthenticationFailureHandler is not configured with a failure URL, and it sends 401 when authentication fails since it does not know which URL to redirect to. Change it to:

.failureHandler(new MyAuthenticationFailureHandler("/guest/login"))

This should redirect to "/guest/login" if authentication fails. A sketch of the matching handler class follows below.

"I noticed that if I type anyRequest().authenticated() in authorizeRequest() everything works, but I have no CSS on my login page, but if I type anyRequest().permitAll(), I have CSS on my site"

Because in the case of anyRequest().authenticated(), the CSS's URL also requires an authenticated user to access. But on the login page the user must not be authenticated; if they were already authenticated, it would not make sense for them to go to the login page. So no CSS is shown on the login page, since only unauthenticated users can reach it. You have to exclude all the URL resources required by the login page from any protections by configuring WebSecurity, so that everyone can access them:

public void configure(WebSecurity web) throws Exception {
    web.ignoring()
        .antMatchers("/css/**")
        .antMatchers("/anyThingRequiredByLoginPageToWork/**");
}
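For the constructor call above to work, MyAuthenticationFailureHandler needs a constructor that forwards the URL to its superclass. A minimal sketch, assuming it extends Spring's SimpleUrlAuthenticationFailureHandler:

import org.springframework.security.web.authentication.SimpleUrlAuthenticationFailureHandler;

public class MyAuthenticationFailureHandler extends SimpleUrlAuthenticationFailureHandler {

    public MyAuthenticationFailureHandler(String defaultFailureUrl) {
        // The parent handler redirects to this URL on failure instead of sending 401.
        super(defaultFailureUrl);
    }
}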
Clear your cookies and try again, and see if you can reduce the size and number of cookies your app is using. When you set the registry key value, make sure you consider the points below:

1) Calculate the size of the user's Kerberos token by using the formula described in the following Knowledge Base article: 327825 Problems with Kerberos authentication when a user belongs to many groups

2) Set the value of MaxFieldLength and MaxRequestBytes on the server to 4/3 * T, where T is the user's token size in bytes. HTTP encodes the Kerberos token by using base64 encoding.

https://support.microsoft.com/en-us/help/2020943/http-400-bad-request-request-header-too-long-response-to-http-request

Note: Make sure you restart the machine after making the changes.

You could also try to add the code below to your site's web.config file:

<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <requestLimits maxAllowedContentLength="500000000" />
      </requestFiltering>
    </security>
  </system.webServer>
  <system.web>
    <httpRuntime maxRequestLength="500000000" executionTimeout="120" />
  </system.web>
</configuration>

If you still face the same issue, try to use Fiddler or any other tool to capture network traffic and properly analyze the request and response headers.
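The two registry values live under the HTTP service's Parameters key. A hedged PowerShell sketch for setting them (the 65534 value is only illustrative; compute 4/3 * T for your environment and stay within the documented limits from the KB article):

# Both values are DWORDs under the HTTP.sys parameters key.
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\HTTP\Parameters'
Set-ItemProperty -Path $key -Name MaxFieldLength  -Value 65534 -Type DWord
Set-ItemProperty -Path $key -Name MaxRequestBytes -Value 65534 -Type DWord
# A restart is required for HTTP.sys to pick up the new limits.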
I had a similar issue recently, and I think you can achieve what you are looking for like so. In your security config add:

@Override
protected void configure(HttpSecurity http) throws Exception {
    ...
    http.exceptionHandling().accessDeniedHandler(accessDeniedHandler())
        .authenticationEntryPoint(authenticationEntryPoint())
    ...
}

/**
 * @return Custom {@link AuthenticationEntryPoint} to send a suitable response in the event of a
 * failed authentication attempt.
 */
@Bean
public AuthenticationEntryPoint authenticationEntryPoint() {
    return new CustomAuthenticationEntryPoint();
}

Then create your CustomAuthenticationEntryPoint class and write out any custom message:

public class CustomAuthenticationEntryPoint implements AuthenticationEntryPoint {

    @Override
    public void commence(HttpServletRequest request, HttpServletResponse response,
            AuthenticationException authException) throws IOException {
        response.setStatus(HttpStatus.UNAUTHORIZED.value());
        Map<String, Object> data = new HashMap<>();
        data.put("message", "Your message");
        data.put("timestamp", LocalDateTime.now());
        OutputStream out = response.getOutputStream();
        ObjectMapper mapper = new ObjectMapper();
        mapper.writeValue(out, data);
        out.flush();
    }
}

Hope this helps!
There seems to be a bit of misunderstanding in the comments, so I'll start by explaining the problem a little more. Type juggling refers to the behaviour of PHP whereby variables are implicitly cast to different data types under certain conditions. For example, all the following logical expressions will evaluate to true in PHP:

0 == 0                         // int vs. int
"0" == 0                       // str -> int
"abc" == 0                     // any non-numerical string -> 0
"1.234E+03" == "0.1234E+04"    // string that looks like a float -> float
"0e215962017" == 0             // another string that looks like a float

The last of these examples is interesting because its MD5 hash value is another string consisting of 0e followed by a bunch of decimal digits (0e291242476940776845150308577824). So here's another logical expression in PHP that will evaluate to true:

"0e215962017" == md5("0e215962017")

To solve this CTF challenge, you have to find a string that is "equal" to its own hash value, but using the RIPEMD160 algorithm instead of MD5. When this is provided as a query string variable (e.g., ?hash=0e215962017), then the PHP script will disclose the value of a flag.

Fake hash collisions like this aren't difficult to find. Roughly 1 in every 256 MD5 hashes will start with "0e", and the probability that the remaining 30 characters are all digits is (10/16)^30. If you do the maths, you'll find that the probability of an MD5 hash equating to zero in PHP is approximately one in 340 million. It took me about a minute (almost 216 million attempts) to find the above example.

Exactly the same method can be used to find similar values that work with RIPEMD160. You just need to test more hashes, since the extra hash digits mean that the probability of a "collision" will be approximately one in 14.6 billion. Quite a lot, but still tractable (in fact, I found a solution to this challenge in about 15 minutes, but I'm not posting it here).

Your code, on the other hand, will take much, much longer to find a solution. First of all, there is absolutely no point in generating random inputs. Sequential values will work just as well, and will be much faster to generate. If you use sequential input values, then you also won't need to worry about repeating the same hash calculations.

Your code uses a list structure to store previously hashed values. This is a terrible idea. Searching for an item in a list is an O(n) operation, so once your code has (unsuccessfully) tested a billion inputs, it will have to compare every new input against each of those billion inputs at every iteration, causing your code to grind to a complete standstill. Your code would actually run a lot faster if you didn't bother checking for duplicates at all. When you have time, I suggest you learn when to use lists, dicts and sets in Python.

Another problem is that your code only tests 10-digit numbers, which means it can only test a maximum of 10 billion possible inputs. Based on the numbers given above, are you sure this is a sensible limit?

Finally, your code is printing every single input string before you calculate its hash. Before your program outputs a solution, you can expect it to print out somewhere in the order of a billion screenfuls of incorrect guesses. Is there any point in doing this? No.

Here's the code I used to find the MD5 collision I mentioned earlier.
You can easily adapt it to work with RIPEMD160, and you can convert it to Python if you like (although the PHP code is much simpler):

$n = 0;
while (1) {
    $s = "0e$n";
    $h = md5($s);
    if ($s == $h) break;
    $n++;
}
echo "$s : $h\n";

Note: Use PHP's hash_equals() function and strict comparison operators to avoid this sort of vulnerability in your own code.
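For reference, a rough Python conversion of the same loop, adapted for RIPEMD160, might look like this. It assumes your OpenSSL build exposes ripemd160 through hashlib.new(); since PHP's loose == treats any string of the form 0e<digits> as the float 0.0, the loop simply tests the hash against that pattern:

import hashlib
import re

ZERO_LIKE = re.compile(r"^0e\d+$")  # any such string == 0 under PHP type juggling

n = 0
while True:
    s = f"0e{n}"
    h = hashlib.new("ripemd160", s.encode()).hexdigest()
    if ZERO_LIKE.match(h):  # then s == h in PHP, since both compare equal to 0
        print(s, ":", h)
        break
    n += 1

Expect on the order of 14.6 billion iterations before a hit, so a compiled language (or at least several parallel processes) is a better fit than a single Python loop.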
Here you have some code for OAuth 2.0 in the Dropbox, LinkedIn and MS Live scope:
https://github.com/microsoft/cpprestsdk/blob/master/Release/samples/Oauth2Client/Oauth2Client.cpp

Other samples within the C++ REST SDK:
https://github.com/microsoft/cpprestsdk/tree/master/Release/samples

First of all, you have to distinguish:

1. MS Graph authentication - which is, in fact, Azure Active Directory/Microsoft identity platform authentication, based on OAuth 2.0 (short name: MSAL)
2. Accessing the MS Graph API using the access token from the authentication process (in the standard process you would use the MS Graph SDK)

For C++ there is no MSAL or SDK library. So for authentication you should use the OAuth 2.0 example I pasted above. Because you need to write everything on your own, please read the docs about authentication for MS Graph thoroughly: https://docs.microsoft.com/en-us/graph/auth/

Here you can see all the needed endpoints, secrets etc. for sample Postman calls:
https://docs.microsoft.com/en-us/graph/use-postman#set-up-on-behalf-of-api-calls
https://developer.microsoft.com/en-us/graph/blogs/30daysmsgraph-day-13-postman-to-make-microsoft-graph-calls/

In the URLs the following variables are used:

Callback URL: https://app.getpostman.com/oauth2/callback
Auth URL: https://login.microsoftonline.com/**TENANTID**/oauth2/v2.0/authorize
Access Token URL: https://login.microsoftonline.com/**TENANTID**/oauth2/v2.0/token
Client ID: CLIENTID
Client Secret: CLIENTSECRET
Scope: https://graph.microsoft.com/.default
State: RANDOMSTRING

For the API calls, read about the Microsoft Graph REST API v1.0 reference:
https://docs.microsoft.com/en-us/graph/api/overview?toc=./ref/toc.json&view=graph-rest-1.0
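As an illustration of the two steps, here is a rough client-credentials sketch with the C++ REST SDK. TENANTID, CLIENTID and CLIENTSECRET are placeholders, the endpoints are the ones listed above, and the code should be treated as an untested outline rather than a ready implementation:

#include <cpprest/http_client.h>
#include <iostream>

using namespace web;
using namespace web::http;
using namespace web::http::client;

int main()
{
    // Step 1: request an app-only token from the v2.0 token endpoint.
    http_client token_client(U("https://login.microsoftonline.com/TENANTID/oauth2/v2.0/token"));
    http_request token_req(methods::POST);
    token_req.set_body(U("client_id=CLIENTID"
                         "&client_secret=CLIENTSECRET"
                         "&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default"
                         "&grant_type=client_credentials"),
                       U("application/x-www-form-urlencoded"));

    auto token = token_client.request(token_req)
        .then([](http_response resp) { return resp.extract_json(); })
        .then([](json::value body) { return body[U("access_token")].as_string(); })
        .get();

    // Step 2: call the Graph API with the token as a Bearer header.
    http_client graph(U("https://graph.microsoft.com/v1.0"));
    http_request users_req(methods::GET);
    users_req.set_request_uri(U("/users"));
    users_req.headers().add(U("Authorization"), U("Bearer ") + token);
    graph.request(users_req)
        .then([](http_response resp) { std::wcout << resp.status_code() << std::endl; })
        .wait();
}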
The first thing you need to do is a risk assessment; basically:

what you trust completely
what you can trust somehow
what you cannot trust

In the first category you may have authentication tokens when they come from a third party, or are input by the user. In the second one, you may have something which identifies your mobile device but is not easily known to third parties (the factory ID of the device, for instance, or the incoming IP). In the third one you will have information which is sent by the device, such as "this is the user that is connecting".

Depending on this analysis, you will end up with some solutions. To take a fictitious example:

full access to data when the caller is identified by multi-factor authentication
limited information when connecting from a known IP
public information when accessing with a correct JSON schema
rejecting incomprehensible requests

There are no miracles; at some point you need to trust something. It is a matter of how easily this information can be obtained or modified by the attacker.
Q: "The goal is to store the IP addresses in a vault.yml file, what would be the best way to accomplish this?" A: Ansible doesn't have to know the IP address of the remote host as long as either the alias or ansible_host is resolvable. See Connecting to hosts: behavioral inventory parameters. For example, let's create an inventory file shell> cat hosts [srv] srv1 ansible_host=srv1.example.com srv2 ansible_host=srv2.example.com srv3 ansible_host=srv3.example.com Then create the vault file with the IP addresses. For example shell> cat group_vars/srv/ip.yml srv_ip: srv1: 192.168.1.11 srv2: 192.168.1.12 srv3: 192.168.1.13 Encrypt the file shell> ansible-vault encrypt group_vars/srv/ip.yml Encryption successful Now it's possible to use the encrypted file in the playbook. For example shell> cat pb.yml - hosts: srv tasks: - debug: var: srv_ip[inventory_hostname] gives shell> ansible-playbook -i hosts pb.yml ok: [srv2] => { "srv_ip[inventory_hostname]": "192.168.1.12" } ok: [srv1] => { "srv_ip[inventory_hostname]": "192.168.1.11" } ok: [srv3] => { "srv_ip[inventory_hostname]": "192.168.1.13" }
You don't need to pass more parameters to the JWT auth function. Here is what I use for this scenario. First, I assume that your JWTs have the user role info in their payload.

Passport just authenticates the JWT: it checks whether the JWT is valid or not, and if it is valid it parses the JWT payload for you to use. This code is from the official documentation of passport-jwt:

passport.use(new JwtStrategy(opts, function(jwt_payload, done) {
    User.findOne({id: jwt_payload.sub}, function(err, user) {
        if (err) {
            return done(err, false);
        }
        if (user) {
            return done(null, user);
        } else {
            return done(null, false);
        }
    });
}));

As you can see, it finds the user by using the JWT sub claim from the parsed JWT:

jwt_payload.sub

If you put your roles in your JWT, you can do something like:

jwt_payload.roles

Then notice that, if the user is found, it calls done with the second parameter "user". It is just giving something to Passport to be put in the request object, so you can use it from your request object like:

req.user

Now, instead of passing the user instance directly, you can pass a wrapper object:

const principal = {
    instance: user,
    roles: jwt_payload.roles // or whatever is right for you
};
return done(null, principal);

Now remember ExpressJS middleware logic. You can create a role checker middleware and use it after JWT authentication. Since an Express middleware only receives (req, res, next), wrap it in a factory that takes the roles:

const roleCheckMiddleware = (roles) => (req, res, next) => {
    // Reject with 403 (Forbidden) if req.user.roles does not contain
    // any of the given roles; otherwise pass control on.
    if (!req.user || !req.user.roles.some(r => roles.includes(r))) {
        return res.status(403).json({ message: 'Forbidden' });
    }
    next();
};

And define your route with the necessary calls:

app.post(
    '/addFeature',
    passport.authenticate('jwt', {session: false}),
    roleCheckMiddleware(['admin', 'manager'])
);
You have to check your email provider and its SMTP settings: server, port and encryption method. The following code snippet works for me. Put

// 1) get the session object
Properties properties = new Properties();
properties.put("mail.smtp.auth", "true"); // You have missed this line.
properties.put("mail.smtp.starttls.enable", "true");
// This SMTP server works for me with all Microsoft email providers, like:
// Outlook, Hotmail, Live, MSN, Office 365 and Exchange.
properties.put("mail.smtp.host", "smtp.live.com");
properties.put("mail.smtp.port", "587");
properties.put("mail.smtp.user", user);
properties.put("mail.smtp.pwd", password);

Session session = Session.getInstance(properties, null);
session.setDebug(true); // To trace the code implementation.

Transport transport = session.getTransport("smtp");
transport.connect("smtp.live.com", 587, user, password);
transport.close();

instead of

props.put("mail.smtp.port", 465);
props.put("mail.smtp.socketFactory.port", 465);
props.put("mail.smtp.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
props.put("mail.smtp.socketFactory.fallback", "false");
props.put("mail.smtp.auth", "true");
props.put("mail.debug", "true");
props.put("mail.smtp.host", _server);

session = Session.getInstance(props, this);

try {
    transport = session.getTransport("smtp");
    transport.connect("mail.company.com", _user, _pass);
    transport.close();

I found this website very helpful for getting the SMTP settings of other email providers.
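The snippet above only verifies that the connection and login succeed. To actually send mail over that session, a minimal sketch looks like this (the recipient address is a placeholder; send before closing the transport):

// Build and send a simple message over the already-connected transport.
MimeMessage message = new MimeMessage(session);
message.setFrom(new InternetAddress(user));
message.setRecipient(Message.RecipientType.TO, new InternetAddress("recipient@example.com"));
message.setSubject("Test message");
message.setText("Hello from JavaMail");
transport.sendMessage(message, message.getAllRecipients());
transport.close();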
I ran into the same problem. After many hours, a solution was found. My code is based on this question1 and question2.

C# code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Security.Cryptography;

namespace ConsoleApplication2
{
    class Program
    {
        static void Main(string[] args)
        {
            var m_strPassPhrase = "YYYYYYYYYYYYYYYYYYY";
            var p_strSaltValue = "XXXXXXXXXXXXXXXXX";
            var m_strPasswordIterations = 2;
            var m_strInitVector = "ZZZZZZZZZZZZZZZZ";
            var plainText = "myPassword";
            var blockSize = 32;

            var saltValueBytes = Encoding.ASCII.GetBytes(p_strSaltValue);
            var password = new Rfc2898DeriveBytes(m_strPassPhrase, saltValueBytes, m_strPasswordIterations);
            var keyBytes = password.GetBytes(blockSize);

            var symmetricKey = new RijndaelManaged();
            var initVectorBytes = Encoding.ASCII.GetBytes(m_strInitVector);
            var encryptor = symmetricKey.CreateEncryptor(keyBytes, initVectorBytes);

            var memoryStream = new System.IO.MemoryStream();
            var cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write);
            var plainTextBytes = Encoding.UTF8.GetBytes(plainText);
            cryptoStream.Write(plainTextBytes, 0, plainTextBytes.Length);
            cryptoStream.FlushFinalBlock();
            var cipherTextBytes = memoryStream.ToArray();
            memoryStream.Close();
            cryptoStream.Close();

            var cipherText = Convert.ToBase64String(cipherTextBytes);
            Console.WriteLine(cipherText);
            Console.WriteLine("\n end");
        }
    }
}

For Flutter you can use pointycastle.

Dart code (use the decryptString and cryptString methods):

import 'dart:convert';
import 'package:pointycastle/block/aes_fast.dart';
import 'dart:typed_data';
import 'package:pointycastle/export.dart';
import 'package:pointycastle/key_derivators/pbkdf2.dart';
import 'package:pointycastle/paddings/pkcs7.dart';
import 'package:pointycastle/pointycastle.dart';

const KEY_SIZE = 32; // 32 byte key for AES-256
const ITERATION_COUNT = 2;
const SALT = "XXXXXXXXXXXXXXXXX";
const INITIAL_VECTOR = "ZZZZZZZZZZZZZZZZ";
const PASS_PHRASE = "YYYYYYYYYYYYYYYYYYY";

Future<String> cryptString(String text) async {
  String encryptedString = "";
  final mStrPassPhrase = toUtf8(PASS_PHRASE);
  encryptedString =
      AesHelper.encrypt(mStrPassPhrase, toUtf8(text), mode: AesHelper.CBC_MODE);
  return encryptedString;
}

Future<String> decryptString(String text) async {
  String decryptedString = "";
  final mStrPassPhrase = toUtf8(PASS_PHRASE);
  decryptedString =
      AesHelper.decrypt(mStrPassPhrase, toUtf8(text), mode: AesHelper.CBC_MODE);
  return decryptedString;
}

///MARK: AesHelper class
class AesHelper {
  static const CBC_MODE = 'CBC';
  static const CFB_MODE = 'CFB';

  static Uint8List deriveKey(dynamic password,
      {String salt = '',
      int iterationCount = ITERATION_COUNT,
      int derivedKeyLength = KEY_SIZE}) {
    if (password == null || password.isEmpty) {
      throw new ArgumentError('password must not be empty');
    }

    if (password is String) {
      password = createUint8ListFromString(password);
    }

    Uint8List saltBytes = createUint8ListFromString(salt);
    Pbkdf2Parameters params =
        new Pbkdf2Parameters(saltBytes, iterationCount, derivedKeyLength);
    KeyDerivator keyDerivator =
        new PBKDF2KeyDerivator(new HMac(new SHA1Digest(), 64));
    keyDerivator.init(params);

    return keyDerivator.process(password);
  }

  static Uint8List pad(Uint8List src, int blockSize) {
    var pad = new PKCS7Padding();
    pad.init(null);

    int padLength = blockSize - (src.length % blockSize);
    var out = new Uint8List(src.length + padLength)..setAll(0, src);
    pad.addPadding(out, src.length);

    return out;
  }

  static Uint8List unpad(Uint8List src) {
    var pad = new PKCS7Padding();
    pad.init(null);

    int padLength = pad.padCount(src);
    int len = src.length - padLength;

    return new Uint8List(len)..setRange(0, len, src);
  }

  static String encrypt(String password, String plaintext,
      {String mode = CBC_MODE}) {
    String salt = toASCII(SALT);
    Uint8List derivedKey = deriveKey(password, salt: salt);
    KeyParameter keyParam = new KeyParameter(derivedKey);
    BlockCipher aes = new AESFastEngine();

    var ivStr = toASCII(INITIAL_VECTOR);
    Uint8List iv = createUint8ListFromString(ivStr);

    BlockCipher cipher;
    ParametersWithIV params = new ParametersWithIV(keyParam, iv);
    switch (mode) {
      case CBC_MODE:
        cipher = new CBCBlockCipher(aes);
        break;
      case CFB_MODE:
        cipher = new CFBBlockCipher(aes, aes.blockSize);
        break;
      default:
        throw new ArgumentError('incorrect value of the "mode" parameter');
    }
    cipher.init(true, params);

    Uint8List textBytes = createUint8ListFromString(plaintext);
    Uint8List paddedText = pad(textBytes, aes.blockSize);
    Uint8List cipherBytes = _processBlocks(cipher, paddedText);

    return base64.encode(cipherBytes);
  }

  static String decrypt(String password, String ciphertext,
      {String mode = CBC_MODE}) {
    String salt = toASCII(SALT);
    Uint8List derivedKey = deriveKey(password, salt: salt);
    KeyParameter keyParam = new KeyParameter(derivedKey);
    BlockCipher aes = new AESFastEngine();

    var ivStr = toASCII(INITIAL_VECTOR);
    Uint8List iv = createUint8ListFromString(ivStr);

    Uint8List cipherBytesFromEncode = base64.decode(ciphertext);

    Uint8List cipherIvBytes =
        new Uint8List(cipherBytesFromEncode.length + iv.length)
          ..setAll(0, iv)
          ..setAll(iv.length, cipherBytesFromEncode);

    BlockCipher cipher;
    ParametersWithIV params = new ParametersWithIV(keyParam, iv);
    switch (mode) {
      case CBC_MODE:
        cipher = new CBCBlockCipher(aes);
        break;
      case CFB_MODE:
        cipher = new CFBBlockCipher(aes, aes.blockSize);
        break;
      default:
        throw new ArgumentError('incorrect value of the "mode" parameter');
    }
    cipher.init(false, params);

    int cipherLen = cipherIvBytes.length - aes.blockSize;
    Uint8List cipherBytes = new Uint8List(cipherLen)
      ..setRange(0, cipherLen, cipherIvBytes, aes.blockSize);
    Uint8List paddedText = _processBlocks(cipher, cipherBytes);
    Uint8List textBytes = unpad(paddedText);

    return new String.fromCharCodes(textBytes);
  }

  static Uint8List _processBlocks(BlockCipher cipher, Uint8List inp) {
    var out = new Uint8List(inp.lengthInBytes);

    for (var offset = 0; offset < inp.lengthInBytes;) {
      var len = cipher.processBlock(inp, offset, out, offset);
      offset += len;
    }

    return out;
  }
}

///MARK: HELPERS
Uint8List createUint8ListFromString(String s) {
  Uint8List ret = Uint8List.fromList(s.codeUnits);
  return ret;
}

String toUtf8(value) {
  var encoded = utf8.encode(value);
  var decoded = utf8.decode(encoded);
  return decoded;
}

String toASCII(value) {
  var encoded = ascii.encode(value);
  var decoded = ascii.decode(encoded);
  return decoded;
}
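A quick usage sketch on the Dart side (assuming the same pass phrase, salt and IV constants on both sides, so that the Dart output matches the C# program's Base64 output):

void main() async {
  final encrypted = await cryptString('myPassword');
  print(encrypted); // same Base64 string the C# program prints

  final decrypted = await decryptString(encrypted);
  print(decrypted); // myPassword
}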
In my opinion it is not necessary to log in or log out. You may even find an example in the documentation without login or logout:

- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build job
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)

So you may wonder what login actually does. If you check the source code you will find that it actually sets up DOCKER_CONFIG (the location of your client configuration files):

export function run(connection: ContainerConnection): any {
    var defer = Q.defer<any>();
    connection.setDockerConfigEnvVariable();
    defer.resolve(null);
    return defer.promise;
}

and what logout does ;)

export function run(connection: ContainerConnection): any {
    // logging out is being handled in connection.close() method, called after the command execution.
    var defer = Q.defer<any>();
    defer.resolve(null);
    return <Q.Promise<any>>defer.promise;
}

So how does it work?

// Connect to any specified container registry
let connection = new ContainerConnection();
connection.open(null, registryAuthenticationToken, true, isLogout);

let dockerCommandMap = {
    "buildandpush": "./dockerbuildandpush",
    "build": "./dockerbuild",
    "push": "./dockerpush",
    "login": "./dockerlogin",
    "logout": "./dockerlogout"
}

let telemetry = {
    command: command,
    jobId: tl.getVariable('SYSTEM_JOBID')
};

console.log("##vso[telemetry.publish area=%s;feature=%s]%s",
    "TaskEndpointId",
    "DockerV2",
    JSON.stringify(telemetry));

/* tslint:disable:no-var-requires */
let commandImplementation = require("./dockercommand");
if (command in dockerCommandMap) {
    commandImplementation = require(dockerCommandMap[command]);
}

let resultPaths = "";
commandImplementation.run(connection, (pathToResult) => {
    resultPaths += pathToResult;
})
/* tslint:enable:no-var-requires */
.fin(function cleanup() {
    if (command !== "login") {
        connection.close(true, command);
    }
})

Starting the build command you will:

connect to the container registry
run the command
close the connection (if this is not a login command)

And this is what closing the connection does:

If registry info is present, remove auth for only that registry. (This can happen for any command - build, push, logout etc.)
Else, remove all auth data. (This would happen only in the case of the logout command. For other commands, logout is not called.)

Answering your question: you can live without the login and logout commands.
Firebase Authentication only allows the users to identify themselves. What you're describing is limiting which users are allowed to use your app, which is known as authorization, and Firebase Authentication doesn't handle that.

Luckily you tagged with firebase-realtime-database too, and authorization is definitely built into that. What I'd usually do is create a top-level node in the database that contains the UID of users that are allowed to use the app:

"allowedUsers": {
  "uidOfUser1": true,
  "uidOfUser2": true
  ...
}

Then in other security rules you'll check if the user's UID is in this list before allowing them access to data, with something like:

{
  "rules": {
    "employees": {
      ".read": "root.child('allowedUsers').child(auth.uid).exists()",
      "$uid": {
        ".write": "auth.uid === $uid && root.child('allowedUsers').child(auth.uid).exists()"
      }
    }
  }
}

With these rules:

Allowed users that are signed in can read all employee data.
But they can only modify their own employee data.

Of course you'll want to modify these rules to fit your own requirements, but hopefully this allows you to get started. A few things to keep in mind:

The UID is only created once the user signs up in Firebase Authentication. This means you may have to map from the email addresses you have to the corresponding UIDs. You can either do this by precreating all users, do it in batches, or use a Cloud Function that triggers on user creation to add the known users to the allowedUsers list (see the sketch after this list).

Alternatively you can store the list of email addresses in the database. Just keep in mind that somebody could sign in with somebody else's email address, unless you require email address verification. Oh, and you can't store an email address as a key in the database as-is, since . characters are not allowed, so you'll have to do some form of encoding on that.

Also see:

How do I lock down Firebase Database to any user from a specific (email) domain? (also shows how to check for email verification)
Restrict Firebase users by email (shows using encoded email addresses)
firebase realtime db security rule to allow specific users
How to disable Signup in Firebase 3.x
Firebase - Prevent user authentication
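Here is the kind of Cloud Function I mean: a minimal sketch (1st-gen Cloud Functions syntax; the approved address list and node names are illustrative) that adds a UID to allowedUsers when a known email address signs up:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Runs once whenever a new Firebase Authentication user is created.
exports.addAllowedUser = functions.auth.user().onCreate(async (user) => {
  const approved = ['alice@example.com', 'bob@example.com']; // your known addresses
  if (user.email && approved.includes(user.email)) {
    await admin.database().ref(`allowedUsers/${user.uid}`).set(true);
  }
});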
"What I want is: the user logs in with his registered email and password and then logs out. When he goes in for login via Google, there needs to be an error toast preventing him from signing in via Google if the same email address has been registered already. Basically, if his email address has been registered, then he can log in only via email and password authentication and not via Google."

The flow for solving this problem is to ask the user for the email address from the beginning. Once you have the email address, you can check whether the user already has an account or not. Assuming that you have distinct buttons for each authentication provider, you can display or hide them according to what the user selected for authentication the first time. For instance, check which provider the user signed up with:

auth.fetchSignInMethodsForEmail(email).addOnCompleteListener(signInMethodsTask -> {
    if (signInMethodsTask.isSuccessful()) {
        List<String> signInMethods = signInMethodsTask.getResult().getSignInMethods();
        for (String signInMethod : signInMethods) {
            switch (signInMethod) {
                case GoogleAuthProvider.PROVIDER_ID:
                    googleSignInButton.setVisibility(VISIBLE);
                    facebookSignInButton.setVisibility(GONE);
                    passwordSignInButton.setVisibility(GONE);
                    break;
                case FacebookAuthProvider.PROVIDER_ID:
                    googleSignInButton.setVisibility(GONE);
                    facebookSignInButton.setVisibility(VISIBLE);
                    passwordSignInButton.setVisibility(GONE);
                    break;
                case EmailAuthProvider.PROVIDER_ID:
                    googleSignInButton.setVisibility(GONE);
                    facebookSignInButton.setVisibility(GONE);
                    passwordSignInButton.setVisibility(VISIBLE);
                    break;
                default:
                    googleSignInButton.setVisibility(VISIBLE);
                    facebookSignInButton.setVisibility(VISIBLE);
                    passwordSignInButton.setVisibility(VISIBLE);
                    break;
            }
        }
    }
});

In the case of EmailAuthProvider.PROVIDER_ID, hide the other buttons and display only the button that provides sign-in with email and password. If the user is new, display all buttons so the user can choose one or another of the authentication options.

P.S. There is no need to let the user choose to sign in with another provider if you only want to let the user sign in with a particular one.
First:

"As per your description, I created an endpoint to register a user (a POST to /users/). At first I was getting a 'Authentication credentials were not provided.' if I tried sending a request using Postman (on Django API GUI it would work normally I guess because they already send the correct authentication)."

You have to understand that since the API is a user registration API, the permission class should be set as permission_classes = (AllowAny,), but you set permission_classes = (IsAuthenticated,) in your view, so Django expects proper authentication credentials (a JWT token, as you are using JWT) to make sure the requesting user is authenticated. That's why you are getting an "Authentication credentials were not provided." exception in your POST /users/ API.

Second, as you said later:

"However, when I think about it, it comes to me that he doesn't have the credentials yet since he's not registered and logged in, so its JWT wasn't created, so I added permission_classes = (AllowAny, ) to my APIView"

It's obvious that when a user is registering himself/herself, he/she will not have any credentials (JWT token). Then you said:

"But then it allows anyone to use the API, therefore anyone would be able to send a PATCH request to update some info without sending the JWT in the request."

From these lines it seems that you are using a single API view for both Create (POST) and partial update (PATCH) of a user. What you have to do is make separate API views: one API view to create/register (POST) a user with permission_classes = (AllowAny,), and another API view to update (PATCH) a user with permission_classes = (IsAuthenticated,). I think this will solve your problem; see the sketch below.

EDIT: For a better understanding of how permissions work in Django REST Framework, check this: the way permission works in django rest framework.
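A minimal sketch of that split, assuming UserSerializer is your existing serializer (the view and route names are illustrative):

from rest_framework import generics
from rest_framework.permissions import AllowAny, IsAuthenticated

class UserRegisterView(generics.CreateAPIView):
    """POST /users/ - open to anyone, since new users have no JWT yet."""
    serializer_class = UserSerializer          # assumed to exist in your project
    permission_classes = (AllowAny,)

class UserUpdateView(generics.UpdateAPIView):
    """PATCH /users/me/ - only for authenticated users."""
    serializer_class = UserSerializer
    permission_classes = (IsAuthenticated,)

    def get_object(self):
        # Users may only update themselves, never another account.
        return self.request.user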
Here's an attempt at an explanation.

Q1) Are these WAS_xxxxx cookies like LtpaToken2?
A1) Yes.

Q2) Why two different cookies?
A2) When two different servers issue a cookie of the same name, the second one replaces the first, and the user would get an unexpected authentication challenge when they went back to the first server. The use of different names avoids this problem, but it also prevents transparently load-balancing across multiple identical servers without a trip back to the provider.

Q6) Do we have a single logout endpoint?
A6) OIDC does not have distributed logout, but SAML does. However, when a shared cookie is used, the effect of server "a" deleting the cookie would also block access to servers b, c, d, etc.

Q3) Can we avoid these OP redirects while moving between different servers (can we store the customer details in cookies or somewhere)?
A3) Yes, although a different kind of cookie needs to be used. LTPA cookies don't hold full information about users; the server has to call back to the user registry (such as LDAP) to get it. But when using a remote identity provider such as OIDC, that is not possible. The LTPA cookie can be replaced with a JWT cookie by adding the jwtsso-1.0 feature. Then sharing a cookie across servers without going back to the provider becomes possible.

Q4) How do AJAX calls work? Assume I logged in to server A, but am making an AJAX call to server B.
A4) Once #3 is done, #4 becomes possible, but the servers' CORS settings might need tuning.

Q5) Do we need to expose each server's oidcclient endpoint?
A5) Using a JWT cookie should be self-contained in most cases once you have it, but all clients still need to be able to communicate with the provider in case they are the one that starts the login process.

To implement #3, use Liberty 20.0.0.4 or higher, add this feature:

<feature>jwtsso-1.0</feature>

and add this attribute to the openIdConnectClient xml element:

includeCustomCacheKeyInSubject="false"

If the servers are not completely identical (i.e. Docker containers), then you'll need a little more customization so they build identical JWTs despite having different hostnames and/or ports:

<jwtSso jwtBuilderRef="myBuilder" />
<jwtBuilder id="myBuilder" issuer="https://localhost:9443/jwt" jwkEnabled="false" />
<mpJwt id="myMpJwt" issuer="https://localhost:9443/jwt" />

You'll see a new cookie called JWT. To work across multiple servers, their keystore (used to sign the JWT) and truststore (used to read it) must be configured the same. Sharing the same key.p12 file across all servers is one way to do this. I hope this is helpful.
Problem: Navicat uses a different known_hosts file than the operating system does, so updating ~/.ssh/known_hosts doesn't affect the Navicat connection to the remote server.

Optional workaround: in Navicat, edit the database connection on the SSH tab and change the Host field from a domain to the new IP address.

Fix: in Terminal, run

sudo find ~ -name known_hosts

The results will include something like (using Navicat Essentials for PostgreSQL as an example):

/Users/<user>/Library/Containers/com.prect.NavicatEssentialsForPostgreSQL12/Data/.ssh/known_hosts

Edit that file and remove the line starting with the domain of your remote server, then return to Navicat and click Test Connection again. The connection should work.

If you see the error

Access denied for 'publickey'. Authentication that can continue: publickey,password (101203)

or similar, check the Authentication Method selection and, if you're using 'Public Key' or 'Password and Public Key', click the 'Private Key' file navigator and re-select one of the private keys matching a public key that you've added to the remote server.
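Instead of hand-editing, ssh-keygen can remove the stale entry from a specific known_hosts file; for example (using the illustrative container path from above and a placeholder hostname):

ssh-keygen -R your-server.example.com \
  -f /Users/<user>/Library/Containers/com.prect.NavicatEssentialsForPostgreSQL12/Data/.ssh/known_hosts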
Creating a user in Firebase is (like most calls involving cloud-based APIs) an asynchronous operation, which may take time to complete. By the time your return firebaseAuth.currentUser code runs, the user creation has not been completed yet.

If you simplify the code to run in a single block, with a completion handler, you'll see that it works fine:

firebaseAuth.createUserWithEmailAndPassword(email, password)
    .addOnCompleteListener(this) { task ->
        if (task.isSuccessful) {
            val user = firebaseAuth.currentUser
            val profileUpdates = UserProfileChangeRequest.Builder().apply {
                displayName = name
            }.build()
            user?.updateProfile(profileUpdates)?.addOnCompleteListener { updateTask ->
                if (updateTask.isSuccessful) {
                    Timber.d("User profile updated.")
                }
            }
        } else {
            // If sign in fails, display a message to the user.
            Log.w(TAG, "createUserWithEmail:failure", task.exception)
            Toast.makeText(baseContext, "Authentication failed.", Toast.LENGTH_SHORT).show()
            updateUI(null)
        }
        // ...
    }

Also see:

getContactsFromFirebase() method return an empty list
Firebase retrieve data Null outside method
Retrieve String out of addValueEventListener Firebase
Setting Singleton property value in Firebase Listener (which shows using semaphores to make the code block, but which didn't work on Android when I last tried it)