www.laravel.com/docs/5.1/authentication#resetting-passwords — refer there for the detailed steps. Assuming you haven't made any modifications to the Laravel installation, it is easy to set up.

User model: implement "Illuminate\Contracts\Auth\CanResetPassword" on the App\User model.

Database table migration: run "php artisan migrate" on the console.

Routes: add these routes to routes.php:

    // Password reset link request routes...
    Route::get('password/email', 'Auth\PasswordController@getEmail');
    Route::post('password/email', 'Auth\PasswordController@postEmail');

    // Password reset routes...
    Route::get('password/reset/{token}', 'Auth\PasswordController@getReset');
    Route::post('password/reset', 'Auth\PasswordController@postReset');

View files: go to resources/views/auth and create two new files called password.blade.php and reset.blade.php.
password.blade.php content => http://pastebin.com/RkcFU130
reset.blade.php content => http://pastebin.com/6E5Kjqc4

Email view: now create a new file called password.blade.php at resources/views/emails/password.blade.php and paste this inside it:

    Click here to reset your password: {{ url('password/reset/'.$token) }}

Post-reset redirection: if you want to redirect the user to a specific URL, you can add this to PasswordController.php, replacing "dashboard" with the path you need to redirect to:

    protected $redirectTo = '/dashboard';

That's all :)
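For reference, a minimal sketch of what the User model change can look like. This assumes the stock Laravel 5.1 app/User.php and simply adds the contract plus the trait that fulfils it; adjust to whatever your model already extends:

    <?php

    namespace App;

    use Illuminate\Auth\Authenticatable;
    use Illuminate\Auth\Passwords\CanResetPassword;
    use Illuminate\Contracts\Auth\Authenticatable as AuthenticatableContract;
    use Illuminate\Contracts\Auth\CanResetPassword as CanResetPasswordContract;
    use Illuminate\Database\Eloquent\Model;

    class User extends Model implements AuthenticatableContract, CanResetPasswordContract
    {
        // The CanResetPassword trait supplies the methods the contract requires
        // (e.g. getEmailForPasswordReset()).
        use Authenticatable, CanResetPassword;

        protected $table = 'users';
        protected $fillable = ['name', 'email', 'password'];
        protected $hidden = ['password', 'remember_token'];
    }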
I think your problem is: enctype="multipart/form-data" If you're using a form service sometimes the HTML form tags cause problems. I would try it just using: <input type="file" name="myfile" required> <button type="submit">Upload</button> I would definitely still use some type of form, just thought this would be quick to debug why it is failing. I would use the angular 2 form library. Also, here is a multipart upload service that I use: import { Injectable } from '@angular/core'; @Injectable() export class UploadService { public makeFileRequest(url: string, params: Array<string>, files: Array<File>) { return new Promise((resolve, reject) => { let formData: any = new FormData(); let xhr = new XMLHttpRequest(); for(let i =0; i < files.length; i++) { formData.append('file', files[i], files[i].name); } xhr.onreadystatechange = () => { if (xhr.readyState === 4) { if (xhr.status === 200) { resolve(xhr.response); } else { reject(xhr.response); } } }; let bearer = 'Bearer ' + localStorage.getItem('currentUser'); xhr.open('POST', url, true); xhr.setRequestHeader('Authorization', bearer); xhr.send(formData); }); } } you only need the auth if you're using JWT authentication. If you are not, you want to take out these lines: let bearer = 'Bearer ' + localStorage.getItem('currentUser'); xhr.setRequestHeader('Authorization', bearer);
https://github.com/king-julien/spring-oauth2-customfilter Here is a working sample with Authorization and Resource Server. This Resource Server (vanilla) is a basic stateless application which will not proceed any further until you accept Terms of Service (to accept TOS, Just a do a POST on /tos end point) after authentication. Create a filter @Component public class TosFilter extends OncePerRequestFilter{ @Override protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws ServletException, IOException { System.out.println(request.getRequestURI()); // In realworld scenario HelloWorldController.acceptedTOS is a persisted value rather than a static variable if(!HelloWorldController.acceptedTOS){ //response.sendRedirect("/no-tos"); request.getRequestDispatcher("error-no-tos").forward(request, response); } filterChain.doFilter(request,response); } } Register that filter @Configuration public class SecurityConfig extends WebSecurityConfigurerAdapter { @Autowired TosFilter rolesFilter; @Override public void configure(HttpSecurity httpSecurity) throws Exception{ httpSecurity .addFilterAfter(rolesFilter, AbstractPreAuthenticatedProcessingFilter.class) .csrf().disable() .authorizeRequests().anyRequest().permitAll(); } } Annotate your main with @EnableResourceServer. @SpringBootApplication @EnableResourceServer public class Application { public static void main(String[] args) { SpringApplication.run(Application.class, args); } }
I am not exactly sure if I understood you correctly but the following worked for me. See https://jfconavarrete.wordpress.com/2014/09/15/make-spring-security-context-available-inside-a-hystrix-command/ Basically the tutorial shows how to setup / augment hystrix with an additional "plugin" so the security context is made available inside hystrix wrapped calls via a threadlocal variable With this setup all you need to do is define a feign request interceptor like so: @Bean public RequestInterceptor requestTokenBearerInterceptor() { return new RequestInterceptor() { @Override public void apply(RequestTemplate requestTemplate) { Authentication authentication = SecurityContextHolder.getContext().getAuthentication(); OAuth2AuthenticationDetails details = (OAuth2AuthenticationDetails) authentication.getDetails(); requestTemplate.header("Authorization", "Bearer " + details.getTokenValue()); } }; } With this setup the token contained in the request is made available to the feign request interceptor so you can set the Authorization header on the feign request with the token from your authenticated user. Also note that with this approach you can keep your SessionManagementStrategy "STATELESS" as no data has to be "stored" on the server side
You have a couple of questions there, so let's take them one by one.

What is the purpose of password grant type (ROPC) in OAuth2?

The big objective of this grant type is to provide a seamless migration to OAuth 2.0 for applications that were storing the username and password of the end-users as a way to access other resources on their behalf. Storing user passwords is a big no-no, so having a quick migration step is one good way to ensure developers will move to OAuth 2.0.

... what is the advantage of using client_secret over sending username and password without client_secret?

The username and password serve the purpose of authenticating the end-user; that is, to be sure that the request comes from the user with a specific identity. The client secret has a similar purpose: it's used to authenticate the client application itself. The advantage is that you can trust that the request is being issued from a known and trusted client. This is mostly useful if being able to securely differentiate between more than one client is a requirement.

In relation to using a client secret in a native application that someone can just decompile and get the secret, you're correct in considering this worthless, because you can't trust that type of client authentication. However, OAuth2 only requires the client secret to be used for confidential clients, which is not the case for a native application incapable of securely maintaining a client secret. In this case you perform ROPC without client credentials/secret. This possibility is illustrated in the example tutorial from Auth0 about how you can perform a ROPC grant type request. As you can see in the following snippet, it does not make use of the client secret parameter, as it assumes this is a non-confidential client:

    var options = {
        method: 'POST',
        url: 'https://YOUR_AUTH0_DOMAIN/oauth/token',
        headers: { 'content-type': 'application/json' },
        body: {
            grant_type: 'password',
            username: '[email protected]',
            password: 'pwd',
            audience: 'https://someapi.com/api',
            scope: 'read:sample',
            client_id: 'XyD....23S'
        },
        json: true
    };
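For contrast, a confidential client (for example, a server-side web application that can actually keep a secret) would include client_secret in the same token request. This is only an illustrative sketch mirroring the Auth0 example above; all field values are placeholders:

    var confidentialClientOptions = {
        method: 'POST',
        url: 'https://YOUR_AUTH0_DOMAIN/oauth/token',
        headers: { 'content-type': 'application/json' },
        body: {
            grant_type: 'password',
            username: 'user@example.com',
            password: 'pwd',
            audience: 'https://someapi.com/api',
            scope: 'read:sample',
            client_id: 'XyD....23S',
            // Only meaningful for clients that can keep it confidential (server-side code).
            client_secret: 'SERVER_SIDE_SECRET'
        },
        json: true
    };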
Check Cookies vs Tokens: The Definitive Guide for a good summary on the characteristics of traditional cookie-based authentication systems and the more recent token-based system. TL;DR Tokens-based authentication is more relevant than ever. We examine the differences and similarities between cookie and token-based authentication, advantages of using tokens, and address common questions and concerns developers have regarding token-based auth. I'm not a big fan of this exact terminology because what you actually place within a cookie can also be considered a token; most of the times it's a by-reference token that maps to some server-side data while the so called token-based authentications favors by-value tokens (JWT - Learn JSON Web Tokens) that carry the data within the token itself. JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. The validation of these by-value tokens is accomplished by signatures that ensure that the token was created by the entity holding the associated key used during signing and that the contents cannot be tampered by anyone else without knowledge of the key. This premise is the foundation to trust the received tokens. In relation to CSRF, it's true that a token-based system will mitigate this because to the contrary to what happens with cookies, the browser will not automatically send these token credentials (assumes tokens are not included in the request as cookies). Imagine the following, application CK exposes resources protected with session cookies and application TK exposes resources protected with tokens. User X authenticates in both applications and as such will be issued a session cookie for application CK and a token for application TK. If an attacker creates an evil site EV and tricks user X into visit it, it can perform automatic requests to both application CK and TK from within the user's browser. However, for application CK the browser of user X will automatically include the session cookie and as such evil site EV just accessed a protected resource, while for the request to application TK the browser will not include the token automatically.
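As a concrete illustration of the "by-value token" idea, here is a minimal sketch using Node and the jsonwebtoken npm package with an HMAC shared secret (all names and values are placeholders):

    // Minimal sketch: issuing and validating a by-value (JWT) token.
    const jwt = require('jsonwebtoken');

    const secret = 'replace-with-a-strong-secret'; // placeholder signing key

    // The claims travel inside the token itself; no server-side session lookup is needed.
    const token = jwt.sign({ sub: 'user-42', role: 'admin' }, secret, { expiresIn: '1h' });

    // Verification checks the signature, so any tampering with the payload is rejected.
    try {
        const claims = jwt.verify(token, secret);
        console.log('Trusted claims:', claims);
    } catch (err) {
        console.error('Token rejected:', err.message);
    }

Note that the client then has to attach this token to each request explicitly (typically in an Authorization header), which is exactly why the browser will not send it automatically to a third-party site the way it does with cookies.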
Looks like it is possible with OpenSSH 6.7 - it supports unix socket forwarding. We could start a secondary ssh-agent with specific keys and forward its socket to the remote host. Unfortunately this version is not available for my server/client systems at the time of writing. I have found another possible solution, using socat and standard SSH TCP forwarding.

Idea

On the local host we run a secondary ssh-agent with only the keys we want to see on the remote host. On the local host we set up forwarding of TCP connections on some port (portXXX) to the secondary ssh-agent's socket. On the remote host we set up forwarding from some socket to some TCP port (portYYY). Then we establish the ssh connection with port forwarding from remote's portYYY to local portXXX. Requests to the ssh agent go like this:

    local ssh-agent (secondary)
      ^
      |
      v
    /tmp/ssh-.../agent.ZZZZZ - agent's socket
      ^
      | (socat local)
      v
    localhost:portXXX
      ^
      | (ssh port forwarding)
      v
    remote's localhost:portYYY
      ^
      | (socat remote)
      v
    $HOME/tmp/agent.socket
      ^
      | (requests for auth via agent)
      v
    SSH_AUTH_SOCK=$HOME/tmp/agent.socket
      ^
      | (uses SSH_AUTH_SOCK variable to find agent socket)
      v
    ssh

Drawbacks

It is not completely secure, because the ssh-agent becomes partially available through TCP: users of the remote host can connect to your local agent on 127.0.0.1:portYYY, and other users of your local host can connect on 127.0.0.1:portXXX. But they will see only the limited set of keys you manually added to this agent. And, as AllenLuce mentioned, they can't grab it; they could only use it for authentication while the agent is running.

socat must be installed on the remote host. But it looks like it is possible to simply upload a precompiled binary (I tested it on FreeBSD and it works).

No automation: keys must be added manually via ssh-add, forwarding requires 2 extra processes (socat) to be run, and multiple ssh connections must be managed manually. So, this answer is probably just a proof of concept and not a production solution. Let's see how it can be done.

Instruction

Client side (where ssh-agent is running)

Run a new ssh-agent. It will be used only for keys you want to see on the remote host.

    $ ssh-agent
    # below is ssh-agent output, DO NOT ACTUALLY RUN THESE COMMANDS BELOW
    SSH_AUTH_SOCK=/tmp/ssh-qVnT0UsgV6yO/agent.22982; export SSH_AUTH_SOCK;
    SSH_AGENT_PID=22983; export SSH_AGENT_PID;

It prints some variables. Do not set them: you will lose your main ssh agent. Set another variable with the suggested value of SSH_AUTH_SOCK:

    SSH_AUTH_SECONDARY_SOCK=/tmp/ssh-qVnT0UsgV6yO/agent.22982

Then establish forwarding from some TCP port to our ssh-agent socket locally:

    PORT=9898
    socat TCP4-LISTEN:$PORT,bind=127.0.0.1,fork UNIX-CONNECT:$SSH_AUTH_SECONDARY_SOCK &

socat will run in the background. Do not forget to kill it when you're done.

Add some keys using ssh-add, but run it with the modified environment variable SSH_AUTH_SOCK:

    SSH_AUTH_SOCK=$SSH_AUTH_SECONDARY_SOCK ssh-add

Server side (remote host)

Connect to the remote host with port forwarding. Your main (not secondary) ssh agent will be used for auth on hostA (but will not be available from it, as we do not forward it).

    home-host$ PORT=9898 # same port as above
    home-host$ ssh -R $PORT:localhost:$PORT userA@hostA

On the remote host establish forwarding from the ssh-agent socket to the same TCP port as on your home host:

    remote-host$ PORT=9898 # same port as on home host
    remote-host$ mkdir -p $HOME/tmp
    remote-host$ SOCKET=$HOME/tmp/ssh-agent.socket
    remote-host$ socat UNIX-LISTEN:$SOCKET,fork TCP4:localhost:$PORT &

socat will run in the background. Do not forget to kill it when you're done.
It does not automatically exit when you close the ssh connection.

Connection

On the remote host, set the environment variable so ssh knows where the agent socket (from the previous step) is. It can be done in the same ssh session or in a parallel one.

    remote-host$ export SSH_AUTH_SOCK=$HOME/tmp/ssh-agent.socket

Now it is possible to use the secondary agent's keys on the remote host:

    remote-host$ ssh userB@hostB # uses secondary ssh agent
    Welcome to hostB!
Django Simple History is an excellent app that I've used in production projects in the past, it will give you per model Audits against your users. Furthermore, you should create your own Authentication Class which will be responsible for logging requests. Let's assume that a User uses a Token to authenticate with your API. It gets sent in the header of each HTTP Request to your API like so: Authorization: Bearer <My Token>. We should then log the User associated with the request, the time, the user's IP and the body. This is pretty easy: settings.py REST_FRAMEWORK = { 'DEFAULT_AUTHENTICATION_CLASSES': ( 'common.authentication.MyTokenAuthenticationClass' ), ... } common/authentication.py from django.utils import timezone from django.utils.translation import ugettext_lazy as _ from ipware.ip import get_real_ip from rest_framework import authentication from rest_framework import exceptions from accounts.models import Token, AuditLog class MyTokenAuthenticationClass(authentication.BaseAuthentication): def authenticate(self, request): # Grab the Athorization Header from the HTTP Request auth = authentication.get_authorization_header(request).split() if not auth or auth[0].lower() != b'bearer': return None # Check that Token header is properly formatted and present, raise errors if not if len(auth) == 1: msg = _('Invalid token header. No credentials provided.') raise exceptions.AuthenticationFailed(msg) elif len(auth) > 2: msg = _('Invalid token header. Credentials string should not contain spaces.') raise exceptions.AuthenticationFailed(msg) try: token = Token.objects.get(token=auth[1]) # Using the `ipware.ip` module to get the real IP (if hosted on ElasticBeanstalk or Heroku) token.last_ip = get_real_ip(request) token.last_login = timezone.now() token.save() # Add the saved token instance to the request context request.token = token except Token.DoesNotExist: raise exceptions.AuthenticationFailed('Invalid token.') # At this point, insert the Log into your AuditLog table: AuditLog.objects.create( user_id=token.user, request_payload=request.body, # Additional fields ... ) # Return the Authenticated User associated with the Token return (token.user, token)
Below is a sample FedEx Label Request with Dry Ice which works. Credentials and address information have been removed. Dry ice info goes in RequestedShipment/RequestedPackageLineItems/SpecialServicesRequested While the Documentation also says to put it in RequestedShipment/SpecialServicesRequested I found that doing so would always lead to the following error 8616 (Dry Ice cannot be entered at the shipment level.) Also note that this package also has SIGNATURE_OPTION enabled. The ordering of these and any other SpecialServiceTypes is extremely important. If you are using other Package level Special Services and are getting a Schema validation failed for request error, you may need to re-order the fields or contact FedEx support for help in the ordering. DRY_ICE must always be the first Special Service Type and the DryIceWeight element must come after the list of special services, but before any of the extra elements those Services require. Sample Dry Ice shipment request: <ns:ProcessShipmentRequest xmlns:ns="http://fedex.com/ws/ship/v15" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://fedex.com/ws/ship/v15 ShipService v15.xsd"> <ns:WebAuthenticationDetail> <ns:UserCredential> <ns:Key></ns:Key> <ns:Password></ns:Password> </ns:UserCredential> </ns:WebAuthenticationDetail> <ns:ClientDetail> <ns:AccountNumber></ns:AccountNumber> <ns:MeterNumber></ns:MeterNumber> </ns:ClientDetail> <ns:TransactionDetail> <ns:CustomerTransactionId>CreatePendingRequest</ns:CustomerTransactionId> </ns:TransactionDetail> <ns:Version> <ns:ServiceId>ship</ns:ServiceId> <ns:Major>15</ns:Major> <ns:Intermediate>0</ns:Intermediate> <ns:Minor>0</ns:Minor> </ns:Version> <ns:RequestedShipment> <ns:ShipTimestamp>2016-10-25T11:03:40-07:00</ns:ShipTimestamp> <ns:DropoffType>REGULAR_PICKUP</ns:DropoffType> <ns:ServiceType>PRIORITY_OVERNIGHT</ns:ServiceType> <ns:PackagingType>YOUR_PACKAGING</ns:PackagingType> <ns:Shipper> <ns:Contact> <ns:CompanyName>Name</ns:CompanyName> <ns:PhoneNumber>Phone</ns:PhoneNumber> </ns:Contact> <ns:Address> <ns:StreetLines>Street</ns:StreetLines> <ns:StreetLines>Street</ns:StreetLines> <ns:City>City</ns:City> <ns:StateOrProvinceCode>CA</ns:StateOrProvinceCode> <ns:PostalCode>ZIP</ns:PostalCode> <ns:CountryCode>US</ns:CountryCode> </ns:Address> </ns:Shipper> <ns:Recipient> <ns:Contact> <ns:PersonName>Name</ns:PersonName> <ns:PhoneNumber>Phone</ns:PhoneNumber> </ns:Contact> <ns:Address> <ns:StreetLines>123 MAIN STREET</ns:StreetLines> <ns:StreetLines>MAIL SLOT 45</ns:StreetLines> <ns:City>City</ns:City> <ns:StateOrProvinceCode>CA</ns:StateOrProvinceCode> <ns:PostalCode>Xip</ns:PostalCode> <ns:CountryCode>US</ns:CountryCode> </ns:Address> </ns:Recipient> <ns:ShippingChargesPayment> <ns:PaymentType>SENDER</ns:PaymentType> <ns:Payor> <ns:ResponsibleParty> <ns:AccountNumber></ns:AccountNumber> <ns:Contact> <ns:CompanyName>Name</ns:CompanyName> </ns:Contact> <ns:Address> <ns:CountryCode>US</ns:CountryCode> </ns:Address> </ns:ResponsibleParty> </ns:Payor> </ns:ShippingChargesPayment> <ns:SpecialServicesRequested> </ns:SpecialServicesRequested> <ns:LabelSpecification> <ns:LabelFormatType>COMMON2D</ns:LabelFormatType> <ns:ImageType>ZPLII</ns:ImageType> <ns:LabelStockType>STOCK_4X6</ns:LabelStockType> <ns:LabelPrintingOrientation>TOP_EDGE_OF_TEXT_FIRST</ns:LabelPrintingOrientation> <ns:PrintedLabelOrigin> <ns:Contact> <ns:CompanyName>Company</ns:CompanyName> <ns:PhoneNumber>Phone</ns:PhoneNumber> </ns:Contact> <ns:Address> <ns:StreetLines>Street</ns:StreetLines> 
<ns:City>City</ns:City> <ns:StateOrProvinceCode>CA</ns:StateOrProvinceCode> <ns:PostalCode>Zip</ns:PostalCode> <ns:CountryCode>US</ns:CountryCode> </ns:Address> </ns:PrintedLabelOrigin> </ns:LabelSpecification> <ns:RateRequestTypes>LIST</ns:RateRequestTypes> <ns:PackageCount>1</ns:PackageCount> <ns:RequestedPackageLineItems> <ns:SequenceNumber>1</ns:SequenceNumber> <ns:Weight> <ns:Units>LB</ns:Units> <ns:Value>8</ns:Value> </ns:Weight> <ns:Dimensions> <ns:Length>5</ns:Length> <ns:Width>5</ns:Width> <ns:Height>4</ns:Height> <ns:Units>IN</ns:Units> </ns:Dimensions> <ns:CustomerReferences> <ns:CustomerReferenceType>CUSTOMER_REFERENCE</ns:CustomerReferenceType> <ns:Value>CD0000002199</ns:Value> </ns:CustomerReferences> <ns:CustomerReferences> <ns:CustomerReferenceType>P_O_NUMBER</ns:CustomerReferenceType> <ns:Value>0000497600</ns:Value> </ns:CustomerReferences> <ns:SpecialServicesRequested> <ns:SpecialServiceTypes>DRY_ICE</ns:SpecialServiceTypes> <ns:SpecialServiceTypes>SIGNATURE_OPTION</ns:SpecialServiceTypes> <ns:DryIceWeight> <ns:Units>KG</ns:Units> <ns:Value>2.5</ns:Value> </ns:DryIceWeight> <ns:SignatureOptionDetail> <ns:OptionType>DIRECT</ns:OptionType> </ns:SignatureOptionDetail> </ns:SpecialServicesRequested> </ns:RequestedPackageLineItems> </ns:RequestedShipment> </ns:ProcessShipmentRequest>
@numan's answer provides a good explanation of the process needed to ensure confidentiality, integrity, and authentication. But it doesn't answer the actual question.

The goal of a digital signature is to provide these basic services:

Authenticity: the sender has signed the data as he claimed (the signature is produced with the sender's private key).
Integrity: a guarantee that the data has not changed since the time it was signed.
Non-repudiation: the receiver can provide the data to some third party which can accept the digital signature as proof that the data exchange did take place. Besides, the sender (signing party) cannot deny having signed the data.

It also has properties that ensure authenticity and integrity, such as:

The signature is not forgeable: it provides proof that the signer, and no one else, signed the document.
The signature cannot be repudiated: for legal purposes, the signature and the document are considered physical things, so signers cannot claim later that they did not sign it.
The signature is unalterable: after a document is signed, it cannot be altered.
The signature is not reusable: the signature is part of the document and cannot be moved to a different document.

A digital certificate, on the other hand, is issued by a third-party Certificate Authority (CA) to verify the identity of the certificate holder. It actually contains the Certificate Authority's digital signature, which is produced with the CA's own private key. It also contains the public key that is associated with the owner of the digital certificate. You may want to read about how digital certificates are structured.
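To make the distinction concrete, here is a minimal, self-contained Java sketch of signing and verifying with the standard java.security API. The key pair is generated on the fly purely for illustration; in practice the signer keeps the private key and distributes the public key inside a CA-issued certificate:

    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;

    public class SignatureSketch {
        public static void main(String[] args) throws Exception {
            // Throwaway RSA key pair, for illustration only.
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            KeyPair keyPair = kpg.generateKeyPair();

            byte[] data = "message to protect".getBytes(StandardCharsets.UTF_8);

            // Sign with the private key: authenticity, integrity, non-repudiation.
            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(keyPair.getPrivate());
            signer.update(data);
            byte[] signature = signer.sign();

            // Anyone holding the public key can verify; altering a single byte of
            // the data or the signature makes this print false.
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(keyPair.getPublic());
            verifier.update(data);
            System.out.println("Signature valid: " + verifier.verify(signature));
        }
    }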
AuthenticationEntryPoint is a functional interface (an interface containing only one abstract method: commence). Implementations of functional interfaces can be created using Java lambda expressions. In a pre-Java 8 programming style you could use an anonymous class:

    @Bean
    public AuthenticationEntryPoint unauthorizedEntryPoint() {
        AuthenticationEntryPoint entryPoint = new AuthenticationEntryPoint() {
            @Override
            public void commence(HttpServletRequest request, HttpServletResponse response,
                    AuthenticationException authException) throws IOException, ServletException {
                response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            }
        };
        return entryPoint;
    }

Here we create an AuthenticationEntryPoint anonymous class in which we implement the behaviour of AuthenticationEntryPoint.commence(). Java 8 lambda expressions provide syntactic sugar to reduce the code to just:

    return (request, response, authException) -> response.sendError(HttpServletResponse.SC_UNAUTHORIZED);

request, response and authException will be provided to the method when it is called. More info here: https://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html
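For context, here is a minimal sketch of where such a bean is typically wired in, assuming a WebSecurityConfigurerAdapter-style configuration (the surrounding rules are illustrative, not part of the original answer):

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeRequests()
                .anyRequest().authenticated()
                .and()
            // Unauthenticated requests now get a plain 401 from the entry point
            // defined above instead of a redirect to a login page.
            .exceptionHandling()
                .authenticationEntryPoint(unauthorizedEntryPoint());
    }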
The API has changed to use a middleware system. The SEA (Security, Encryption, Authorization) framework will be published to handle stuff like this. However, you can roll your own by doing something like this on the server: Gun.on('opt', function(ctx){ if(ctx.once){ return } ctx.on('in', function(msg){ var to = this.to; // process message. to.next(msg); // pass to next middleware }); }); Registering the in listener via the opt hook lets this middleware become 1st in line (before even gun core), that way you can filter all inputs and reject them if necessary (by not calling to.next(msg)). Likewise to add headers on the client you would want to register an out listener (similarly to how we did for the in) and modify the outgoing message to have msg.headers = {token: data} and then pass it forward to the next middleware layers (which will probably be websocket/transport hooks) by doing to.next(msg) as well. More docs to come on this as it stabilizes. Old Answer: A very late answer, sorry this was not addressed sooner: The default websocket/ajax adapter allows you to update a headers property that gets passed on every networked message: gun.opt({ headers: { token: JWT }, }); On the server you can then intercept and reject/authorize requests based on the token: gun.wsp(server, function(req, res, next){ if('get' === req.method){ return next(req, res); } if('put' === req.method){ return res({body: {err: "Permission denied!"}}); } }); The above example rejects all writes and authorizes all reads, but you would replace this logic with your own rules.
I believe the license you purchased to use Jira gives you access to the API without further cost.

First steps? The second link you gave in your post relating to the API (docs.atlassian.com/jira/REST/cloud/) gives you everything you need to know if you understand its content. Googling "nodejs jira api" gave a number of package results that would make interacting with the API very easy. At the time node-jira was top of the list and looked like it suited your needs. There are other packages too, so it is worth looking around.

General pointers:

Start on a list of nodejs packages you will need to build your app, from what you know and from package searches. Initialize your node project and start adding those packages to package.json.

Identify the Jira authentication method you are going to use. The API supports basic auth over https or OAuth, plus cookies once authenticated. Find examples of how the package you are using handles authentication; it should be easy to find in the package readme or with Google (a small sketch of a plain basic-auth request follows this list).

Identify the API calls that will give you the data you need. The options are easy to find in the node-jira readme if you are using it, or use the API docs. The Jira API documentation will give you the expected JSON response schema that you will need to access the JSON you get back. An example would be the Projects API definition: it gives you an example response and the full response schema. The API options are described as 'expandable', which means you only get what you ask for; if you want more you have to ask for it (see the expand option for each API call).

Consider what you need to process the data you get back and display it in whatever format you require. Again there are more package options: JSON processing, templating. If it is a web page you might need something like express.

Use that information to start coding (not in any specific order): code for handling incoming requests (say a web page), code for authentication and API calls, code for templating each data view of API response data, and code for the overall app structure. Give yourself some debug messages that can be turned on and off so you can see the process sequence, which can help a lot in troubleshooting.

Write test scripts! Change code... run the tests. Got a new feature? Write a test, then code to the test. Retest before release.

There are lots of package options, information, and examples. Use Google lots, search npmjs.com for packages, and use the API docs.
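As promised above, a minimal sketch of a basic-auth call to the Jira REST API using only Node's built-in https module. The host, credentials and endpoint are placeholders; check the API docs for the resources you actually need:

    // Lists projects from a Jira instance using HTTP basic auth.
    const https = require('https');

    const JIRA_HOST = 'your-domain.atlassian.net';                    // placeholder
    const auth = Buffer.from('EMAIL:API_TOKEN').toString('base64');  // placeholder credentials

    const options = {
        hostname: JIRA_HOST,
        path: '/rest/api/2/project',   // projects resource; other resources work the same way
        method: 'GET',
        headers: {
            'Authorization': 'Basic ' + auth,
            'Accept': 'application/json'
        }
    };

    https.request(options, (res) => {
        let body = '';
        res.on('data', (chunk) => { body += chunk; });
        res.on('end', () => console.log(JSON.parse(body)));
    }).on('error', console.error).end();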
You can try this code: public class LoginActivity extends AppCompatActivity implements GoogleApiClient.OnConnectionFailedListener, View.OnClickListener { private static final String TAG = "SignInActivity"; private static final int RC_SIGN_IN = 9001; private GoogleApiClient mGoogleApiClient; private FirebaseAuth mAuth; private FirebaseAuth.AuthStateListener mAuthListener; private CallbackManager mCallbackManager; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_login); // Facebook Login FacebookSdk.sdkInitialize(getApplicationContext()); mCallbackManager = CallbackManager.Factory.create(); LoginButton mFacebookSignInButton = (LoginButton) findViewById(R.id.facebook_button); mFacebookSignInButton.setReadPermissions("email", "public_profile", "user_birthday", "user_friends"); mFacebookSignInButton.registerCallback(mCallbackManager, new FacebookCallback<LoginResult>() { @Override public void onSuccess(LoginResult loginResult) { Log.d(TAG, "facebook:onSuccess:" + loginResult); firebaseAuthWithFacebook(loginResult.getAccessToken()); } @Override public void onCancel() { Log.d(TAG, "facebook:onCancel"); } @Override public void onError(FacebookException error) { Log.d(TAG, "facebook:onError", error); } }); // Google Sign-In // Assign fields Button mGoogleSignInButton = (Button) findViewById(R.id.google_button); // Set click listeners mGoogleSignInButton.setOnClickListener(this); GoogleSignInOptions gso = new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN) .requestIdToken(getString(R.string.default_web_client_id)) .requestEmail() .build(); mGoogleApiClient = new GoogleApiClient.Builder(this) .enableAutoManage(this /* FragmentActivity */, this /* OnConnectionFailedListener */) .addApi(Auth.GOOGLE_SIGN_IN_API, gso) .build(); // Initialize FirebaseAuth mAuth = FirebaseAuth.getInstance(); mAuthListener = new FirebaseAuth.AuthStateListener() { @Override public void onAuthStateChanged(@NonNull FirebaseAuth firebaseAuth) { FirebaseUser user = firebaseAuth.getCurrentUser(); if (user != null) { // User is signed in Log.d(TAG, "onAuthStateChanged:signed_in:" + user.getUid()); } else { // User is signed out Log.d(TAG, "onAuthStateChanged:signed_out"); } } }; } @Override public void onStart() { super.onStart(); mAuth.addAuthStateListener(mAuthListener); } @Override public void onStop() { super.onStop(); if (mAuthListener != null) { mAuth.removeAuthStateListener(mAuthListener); } } private void firebaseAuthWithGoogle(GoogleSignInAccount acct) { Log.d(TAG, "firebaseAuthWithGooogle:" + acct.getId()); AuthCredential credential = GoogleAuthProvider.getCredential(acct.getIdToken(), null); mAuth.signInWithCredential(credential) .addOnCompleteListener(this, new OnCompleteListener<AuthResult>() { @Override public void onComplete(@NonNull Task<AuthResult> task) { Log.d(TAG, "signInWithCredential:onComplete:" + task.isSuccessful()); // If sign in fails, display a message to the user. If sign in succeeds // the auth state listener will be notified and logic to handle the // signed in user can be handled in the listener. 
if (!task.isSuccessful()) { Log.w(TAG, "signInWithCredential", task.getException()); Toast.makeText(LoginActivity.this, "Authentication failed.", Toast.LENGTH_SHORT).show(); } else { startActivity(new Intent(LoginActivity.this, MainActivity.class)); finish(); } } }); } private void firebaseAuthWithFacebook(AccessToken token) { Log.d(TAG, "handleFacebookAccessToken:" + token); final AuthCredential credential = FacebookAuthProvider.getCredential(token.getToken()); mAuth.signInWithCredential(credential) .addOnCompleteListener(this, new OnCompleteListener<AuthResult>() { @Override public void onComplete(@NonNull Task<AuthResult> task) { Log.d(TAG, "signInWithCredential:onComplete:" + task.isSuccessful()); // If sign in fails, display a message to the user. If sign in succeeds // the auth state listener will be notified and logic to handle the // signed in user can be handled in the listener. if (!task.isSuccessful()) { Log.w(TAG, "signInWithCredential", task.getException()); Toast.makeText(LoginActivity.this, "Authentication failed.", Toast.LENGTH_SHORT).show(); } else { startActivity(new Intent(LoginActivity.this, MainActivity.class)); finish(); } } }); } @Override public void onClick(View v) { switch (v.getId()) { case R.id.google_button: signIn(); break; default: return; } } private void signIn() { Intent signInIntent = Auth.GoogleSignInApi.getSignInIntent(mGoogleApiClient); startActivityForResult(signInIntent, RC_SIGN_IN); } @Override public void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); mCallbackManager.onActivityResult(requestCode, resultCode, data); // Result returned from launching the Intent from GoogleSignInApi.getSignInIntent(...); if (requestCode == RC_SIGN_IN) { GoogleSignInResult result = Auth.GoogleSignInApi.getSignInResultFromIntent(data); if (result.isSuccess()) { // Google Sign In was successful, authenticate with Firebase GoogleSignInAccount account = result.getSignInAccount(); firebaseAuthWithGoogle(account); } else { // Google Sign In failed Log.e(TAG, "Google Sign In failed."); } } } @Override public void onConnectionFailed(@NonNull ConnectionResult connectionResult) { // An unresolvable error has occurred and Google APIs (including Sign-In) will not // be available. Log.d(TAG, "onConnectionFailed:" + connectionResult); Toast.makeText(this, "Google Play Services error.", Toast.LENGTH_SHORT).show(); } } Please let me know if you have any questions.
Here is my attempt to have the following working: express: 4.14 socket.io: 1.5 passport (using sessions): 0.3 redis: 2.6 (Really fast data structure to handle sessions; but you can use others like MongoDB too. However, I encourage you to use this for session data + MongoDB to store other persistent data like Users) Since you might want to add some API requests as well, we'll also use http package to have both HTTP and Web socket working in the same port. server.js The following extract only includes everything you need to set the previous technologies up. You can see the complete server.js version which I used in one of my projects here. import http from 'http'; import express from 'express'; import passport from 'passport'; import { createClient as createRedisClient } from 'redis'; import connectRedis from 'connect-redis'; import Socketio from 'socket.io'; // Your own socket handler file, it's optional. Explained below. import socketConnectionHandler from './sockets'; // Configuration about your Redis session data structure. const redisClient = createRedisClient(); const RedisStore = connectRedis(Session); const dbSession = new RedisStore({ client: redisClient, host: 'localhost', port: 27017, prefix: 'stackoverflow_', disableTTL: true }); // Let's configure Express to use our Redis storage to handle // sessions as well. You'll probably want Express to handle your // sessions as well and share the same storage as your socket.io // does (i.e. for handling AJAX logins). const session = Session({ resave: true, saveUninitialized: true, key: 'SID', // this will be used for the session cookie identifier secret: 'secret key', store: dbSession }); app.use(session); // Let's initialize passport by using their middlewares, which do //everything pretty much automatically. (you have to configure login // / register strategies on your own though (see reference 1) app.use(passport.initialize()); app.use(passport.session()); // Socket.IO const io = Socketio(server); io.use((socket, next) => { session(socket.handshake, {}, next); }); io.on('connection', socketConnectionHandler); // socket.io is ready; remember that ^this^ variable is just the // name that we gave to our own socket.io handler file (explained // just after this). // Start server. This will start both socket.io and our optional // AJAX API in the given port. const port = 3000; // Move this onto an environment variable, // it'll look more professional. server.listen(port); console.info(` API listening on port ${port}`); console.info(` Socket listening on port ${port}`); sockets/index.js Our socketConnectionHandler, I just don't like putting everything inside server.js (even though you perfectly could), especially since this file can end up containing quite a lot of code pretty quickly. export default function connectionHandler(socket) { const userId = socket.handshake.session.passport && socket.handshake.session.passport.user; // If the user is not logged in, you might find ^this^ // socket.handshake.session.passport variable undefined. // Give the user a warm welcome. console.info(`⚡︎ New connection: ${userId}`); socket.emit('Grettings', `Grettings ${userId}`); // Handle disconnection. socket.on('disconnect', () => { if (process.env.NODE_ENV !== 'production') { console.info(`⚡︎ Disconnection: ${userId}`); } }); } Extra material (client): Just a very basic version of what the JavaScript socket.io client could be: import io from 'socket.io-client'; const socketPath = '/socket.io'; // <- Default path. 
// But you could configure your server // to something like /api/socket.io const socket = io.connect('localhost:3000', { path: socketPath }); socket.on('connect', () => { console.info('Connected'); socket.on('Grettings', (data) => { console.info(`Server gretting: ${data}`); }); }); socket.on('connect_error', (error) => { console.error(`Connection error: ${error}`); }); References: I just couldn't reference inside the code, so I moved it here. 1: How to set up your Passport strategies: https://scotch.io/tutorials/easy-node-authentication-setup-and-local#handling-signupregistration
There's already a class that can provide claims enrichment ClaimsAuthenticationManager, which you can extend so it handles your domain-specific claims, for example... public class MyClaimsAuthenticationManager : ClaimsAuthenticationManager { public override ClaimsPrincipal Authenticate(string resourceName, ClaimsPrincipal incomingPrincipal) { if (!incomingPrincipal.Identity.IsAuthenticated) { return base.Authenticate(resourceName, incomingPrincipal); } return AddApplicationClaims(incomingPrincipal); } private ClaimsPrincipal AddApplicationClaims(ClaimsPrincipal principal) { // TODO: Add custom claims here based on current principal. return principal; } } Next task is to provide appropriate middleware to invoke this. For my projects I've written the following classes... /// <summary> /// Middleware component to apply claims transformation to current context /// </summary> public class ClaimsTransformationMiddleware { private readonly Func<IDictionary<string, object>, Task> next; private readonly IServiceProvider serviceProvider; public ClaimsTransformationMiddleware(Func<IDictionary<string, object>, Task> next, IServiceProvider serviceProvider) { this.next = next; this.serviceProvider = serviceProvider; } public async Task Invoke(IDictionary<string, object> env) { // Use Katana's OWIN abstractions var context = new OwinContext(env); if (context.Authentication != null && context.Authentication.User != null) { var manager = serviceProvider.GetService<ClaimsAuthenticationManager>(); context.Authentication.User = manager.Authenticate(context.Request.Uri.AbsoluteUri, context.Authentication.User); } await next(env); } } And then a wiring extension... public static class AppBuilderExtensions { /// <summary> /// Add claims transformation using <see cref="ClaimsTransformationMiddleware" /> any depdendency resolution is done via IoC /// </summary> /// <param name="app"></param> /// <param name="serviceProvider"></param> /// <returns></returns> public static IAppBuilder UseClaimsTransformation(this IAppBuilder app, IServiceProvider serviceProvider) { app.Use<ClaimsTransformationMiddleware>(serviceProvider); return app; } } I know this is service locator anti-pattern but using IServiceProvider is container neutral and seems to be the accepted way of putting dependencies into Owin middleware. Last you need to wire this up in your Startup, example below presumes Unity and registering/exposing a IServiceLocator property... // Owin config app.UseClaimsTransformation(UnityConfig.ServiceLocator);
This is an extremely broad question but having recently gone through all of this I believe i can provide a detailed response for how I have implemented it. This is key because there is a very large amount of options, and if you look up most of the tutorials, they mostly focus around using rails as the back end instead of node or express.js. I will be answering this question based on you using express.js. I'll preface this with remembering that ember-data is a completely different offshoot of ember that you can bypass and entirely not use if you feel your project is not going to need the features with it and just use AJAX requests instead. ember-data adds a lot of complexity and overhead to the initial start of the project. Additionally TLS/SSL is the most important security you can have and without it, any amount of attempted security outside of this is invalid without it. Now that that's out of the way, lets get to the gritty part of setting it up. By default ember-data uses the JSONAPIAdapter which is based on the JSON API specification. Your Express.js API server is going to have to be able to function to this specification if you use the default Adapter with no Serializer changes Breaking the project out into the core components and what they need to do, and the options available is the following (with what I did in bold): Express.js API server Express API routes Authentication library Passport is works well for express.js Custom Authentication mechanism Token Based Cookie Based Data Modeling Mongo Sequelize Other Ember.js based Web Server Adapter (this deals with sending/receiving data and handling errors) application.js: configure an adapter for the whole application Serializer (this deals with making the data from the adapter ember useable) None required by default Authenticator (this ember-simple-auth works well Build your own: example Authorizer ember-simple-auth-token gives you a prebuilt authorizer using token based authentication Database MongoDB (doc-based non-relational database) Redis (in memory non-relational database) MySQL (relational database) PostGreSQL (relational database) Other The basic flow is as follows: User attempts to log in on ember.js app Ember.js uses authenticator to request access from API server API server validates user and returns JSON web token in header Ember.js uses authorizer and adds JSON web token to header for future API requests API call is made to the API server from Ember through the Adapter with authorizer header API server validates token and searches for data required API server responds with data in JSON API specification format Ember.js adapter receives data and handles response Ember.js serializer receives data from adapter and makes it useable by Ember Ember data receives model data from serializer and stores it in cache Model data is populated based on templates and controllers on Ember.js pages Here's how i set it up ** Setting Ember.js up to use Express.js API Server ** Install the following items for ember-cli: ember install ember-simple-auth - For authentication ember install ember-simple-auth-token - For token-based authentication in app/adapters/application.js: import DS from 'ember-data'; import DataAdapterMixin from 'ember-simple-auth/mixins/data-adapter-mixin'; // Authenticating data from the API server import Ember from 'ember'; import ENV from '../config/environment'; export default DS.JSONAPIAdapter.extend(DataAdapterMixin,{ authManager: Ember.inject.service('session'), host: ENV.apihost, // location of the API server 
namespace: ENV.apinamespace, // Namespace of API server ie: 'api/v1' authorizer: 'authorizer:token', // Authorizer to use for authentication ajax: function(url, method, hash) { hash = hash || {}; // hash may be undefined hash.crossDomain = true; // Needed for CORS return this._super(url, method, hash); } }); In config/environment.js: ENV.host = 'http://localhost:4000'; /* this assumes the express.js server is running on port 4000 locally, in a production environment it would point to https://domainname.com/ */ ENV['ember-simple-auth'] = { authorizer: 'authorizer:token', //uses ember-simple-auth-token authorizer crossOriginWhitelist: ['http://localhost:4000'], // for CORS baseURL: '/', authenticationRoute: 'login', // Ember.js route that does authentication routeAfterAuthentication: 'profile', // Ember.js route to transition to after authentication routeIfAlreadyAuthenticated: 'profile' // Ember.js route to transition to if already authenticated }; ENV['ember-simple-auth-token'] = { serverTokenEndpoint: 'http://localhost:4000/auth/token', // Where to get JWT from identificationField: 'email', // identification field that is sent to Express.js server passwordField: 'password', // password field sent to Express.js server tokenPropertyName: 'token', // expected response key from Express.js server authorizationPrefix: 'Bearer ', // header value prefix authorizationHeaderName: 'Authorization', // header key headers: {}, }; ENV['apihost'] = "http://localhost:4000" // Host of the API server passed to `app/adapters/application.js` ENV['apinamespace'] = ""; // Namespace of API server passed to `app/adapters/application.js` ** Setting up Express.js Server ** Required packages: express : Self explanatory body-parser : for parsing JSON from ember.js site cors : for CORS support ejwt : for requiring JWT on most routes to your API server passport : for authenticating users passport-json : for authenticating users bcrypt : for hashing/salting user passwords sequelize : for data modeling ** Setting up server.js ** var express = require('express'); // App is built on express framework var bodyParser = require('body-parser'); // For parsing JSON passed to use through the front end app var cors = require('cors'); // For CORS support var ejwt = require('express-jwt'); var passport = require('passport'); // Load Configuration files var Config = require('./config/environment'), config = new Config // Load our Environment configuration based on NODE_ENV environmental variable. Default is test. var corsOptions = { origin: config.cors }; var app = express(); // Define our app object using express app.use(bodyParser.urlencoded({extended: true})); // use x-www-form-urlencoded used for processing submitted forms from the front end app app.use(bodyParser.json()); // parse json bodies that come in from the front end app app.use(bodyParser.json({ type: 'application/vnd.api+json' })); // THIS ALLOWS ACCEPTING EMBER DATA BECAUSE JSON API FORMAT app.use(cors(corsOptions)); // Cross-Origin Resource Sharing support app.use(passport.initialize()); // initialize passport app.use(ejwt({ secret: config.secret}).unless({path: ['/auth/token', { url : '/users', methods: ['POST']}]})); require('./app/routes')(app); // Load our routes file that handles all the API call routing app.listen(config.port); // Start our server on the configured port. 
Default is 4000 console.log('listening on port : ' + config.port); in config/passport.js // config/passport.js // Configure Passport for local logins // Required Modules var JsonStrategy = require('passport-json').Strategy; // var User = require('../app/models/users'); // load user model // Function module.exports = function (passport) { // serialize the user for the session passport.serializeUser(function (user, done) { done(null, user.id); }); // deserialize the user passport.deserializeUser(function (id, done) { User.findById(id).then(function (user) { done(null, user); }); }); // LOCAL LOGIN ========================================================== passport.use('json', new JsonStrategy({ usernameProp : 'email', passwordProp : 'password', passReqToCallback : true }, function (req, email, password, done) { User.findOne({where : {'email' : email }}).then(function (user) { // check against email if (!user) { User.findOne({where : {'displayName' : email}}).then(function(user){ //check against displayName if (!user) return done(null, false); else if (User.validatePassword(password,user.password)) return done(null, user); else return done(null, false); }); } else if (User.validatePassword(password,user.password)) return done(null, user); else return done(null, false); }); })); }; Example app/models/users.js user sequelize model // Load required Packages var Sequelize = require('sequelize'); var bcrypt = require('bcrypt-node') // Load required helpers var sequelize = require('../helpers/sequelizeconnect'); var config = new require('../../config/environment'); // Load our Environment configuration based on NODE_ENV environmental variable. Default is test. // Load other models // Define model var Users = sequelize.define('users', { "email": { type: Sequelize.STRING}, // user email "password": { type: Sequelize.STRING} // user password }); // Methods ======================================================= // Hash a password before storing Users.generateHash = function(password) { return bcrypt.hashSync(password, bcrypt.genSaltSync(8), null); }; // Compare a password from the DB Users.validatePassword = function(password, dbpassword) { return bcrypt.compareSync(password, dbpassword); } module.exports = Users At this point your express.js server will just need your routes.js set up with routes for what your API server needs, at a minimum of /auth/token in order to perform the authentication. An example of a successful response the Ember.js JSON API adapter expects is: var jsonObject = { // create json response object "data": { "type": "users", // ember.js model "id": 1, // id of the model "attributes": { "email" : "[email protected]", } } } res.status(201).json(jsonObject); // send new data object with 201/OK as a response There is a lot more complexities to setting up the JSON API server to respond to Delete requests, Validation errors, etc.
The Complete Solution: Hi reinierkors, I also tried to do the same with the 5.3 version, I finally solved it :) and the solution is very clean. First, I created a new folder under App\Http\Controllers\Api called Auth, I did it just to add new auth controllers for the api so I can rewrite some functions, then I copied the auth controllers (LoginController, ForgotPasswordController, RegisterController) to this new folder. In LoginController Class: I rewrited the functions that were making the redirects. The first function: will be automatically called when the authentication return success. The second function: will be automatically called when the authentication return error. The last function: will be automatically called when the user has been locked out after trying 5 login attempts. /** * Send the response after the user was authenticated. * * @param \Illuminate\Http\Request $request * @return \Illuminate\Http\Response */ protected function sendLoginResponse(Request $request) { $this->clearLoginAttempts($request); return response()->json(['SUCCESS' => 'AUTHENTICATED'], 200); } /** * Get the failed login response instance. * * @return \Illuminate\Http\Response */ protected function sendFailedLoginResponse() { return response()->json(['ERROR' => 'AUTH_FAILED'], 401); } /** * Error after determining they are locked out. * * @param \Illuminate\Http\Request $request * @return \Illuminate\Http\Response */ protected function sendLockoutResponse(Request $request) { $seconds = $this->limiter()->availableIn( $this->throttleKey($request) ); return response()->json(['ERROR' => 'TOO_MANY_ATTEMPTS', 'WAIT' => $seconds], 401); } In RegisterController Class: I rewrited the functions that were making the redirects. In the first function: I modified the validator response to return a more comfortable response (array) to work with. The second function: will be automatically called when the registration return success. /** * Handle a registration request for the application. * * @param Request $request * @return \Illuminate\Http\Response */ public function register(Request $request) { $validator = $this->validator($request->all()); if($validator->fails()) return response()->json(['ERROR' => $validator->errors()->getMessages()], 422); event(new Registered($user = $this->create($request->all()))); $this->guard()->login($user); return $this->registered($request, $user) ?: redirect($this->redirectPath()); } /** * The user has been registered. * * @param Request $request * @param mixed $user * @return mixed */ protected function registered(Request $request, $user) { return response()->json(['SUCCESS' => 'AUTHENTICATED']); } In ForgotPasswordController Class: I rewrited the function that was making the redirects. I modified the reset link email function so we can get the messages and display as json instead of the redirects. /** * Send a reset link to the given user. * * @param \Illuminate\Http\Request $request * @return \Illuminate\Http\RedirectResponse */ public function sendResetLinkEmail(Request $request) { $validator = Validator::make($request->only('email'), [ 'email' => 'required|email', ]); if ($validator->fails()) return response()->json(['ERROR' => 'VALID_EMAIL_REQUIRED'], 422); // We will send the password reset link to this user. Once we have attempted // to send the link, we will examine the response then see the message we // need to show to the user. Finally, we'll send out a proper response. 
$response = $this->broker()->sendResetLink( $request->only('email') ); if ($response === Password::RESET_LINK_SENT) { return response()->json(['SUCCESS' => 'EMAIL_SENT'], 200); } // If an error was returned by the password broker, we will get this message // translated so we can notify a user of the problem. We'll redirect back // to where the users came from so they can attempt this process again. return response()->json(['ERROR' => 'EMAIL_NOT_FOUND'], 401); }
First of all, was your API server able to parse the following? api/item?c%5B%5D=14%26c%5B%5D%3D74

Encoding is great for avoiding code injection into your server. This is something Refit is a bit opinionated about, i.e. URIs should be encoded and the server should be upgraded to read encoded URIs. This should clearly be an opt-in setting in Refit, but it's not. So currently you can work around it by using a DelegatingHandler:

    /// <summary>
    /// Makes sure the query string of an <see cref="System.Uri"/> is unescaped.
    /// </summary>
    public class UriQueryUnescapingHandler : DelegatingHandler
    {
        public UriQueryUnescapingHandler()
            : base(new HttpClientHandler()) { }

        public UriQueryUnescapingHandler(HttpMessageHandler innerHandler)
            : base(innerHandler) { }

        protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
        {
            var uri = request.RequestUri;
            // You could also simply unescape the whole uri.OriginalString,
            // but I don't recommend that, i.e. only fix what's broken.
            var unescapedQuery = Uri.UnescapeDataString(uri.Query);
            var userInfo = string.IsNullOrWhiteSpace(uri.UserInfo) ? "" : $"{uri.UserInfo}@";
            var scheme = string.IsNullOrWhiteSpace(uri.Scheme) ? "" : $"{uri.Scheme}://";
            request.RequestUri = new Uri($"{scheme}{userInfo}{uri.Authority}{uri.AbsolutePath}{unescapedQuery}{uri.Fragment}");
            return base.SendAsync(request, cancellationToken);
        }
    }

    Refit.RestService.For<IYourService>(new HttpClient(new UriQueryUnescapingHandler()))
Usually I would provide a HttpService myself instead of using Http directly. So with your requirement, I can provide my own get() method to chain the authentication before sending any real HTTP requests. Here is the service: @Injectable() class HttpService { constructor(private http: Http, private auth: Authentication) {} public get(url: string): Observable<Response> { return this.auth.authenticate().flatMap(authenticated => { if (authenticated) { return this.http.get(url); } else { return Observable.throw('Unable to re-authenticate'); } }); } } Here is the component to call the service: @Component({ selector: 'my-app', template: `<h1>Hello {{name}}</h1> <button (click)="doSomething()">Do Something</button> <div [hidden]="!auth.showModal"> <p>Do you confirm to log in?</p> <button (click)="yes()">Yes</button><button (click)="no()">No</button> </div> `, }) export class AppComponent { name = 'Angular'; constructor(private httpSvc: HttpService, public auth: Authentication) {} ngOnInit() { } doSomething() { let a = this.httpSvc.get('hello.json').subscribe(() => { alert('Data retrieved!'); }, err => { alert(err); }); } yes() { this.auth.confirm.emit(true); } no() { this.auth.confirm.emit(false); } } By chaining observables, the Authentication service determines whether to interrupt the normal flow to show the modal (though currently only lives with the App component, it can certainly be implemented separately). And once a positive answer is received from the dialog, the service can resume the flow. class Authentication { public needsAuthentication = true; public showModal = false; public confirm = new EventEmitter<boolean>(); public authenticate(): Observable<boolean> { // do something to make sure authentication token works correctly if (this.needsAuthentication) { this.showModal = true; return Observable.create(observer => { this.confirm.subscribe(r => { this.showModal = false; this.needsAuthentication = !r; observer.next(r); observer.complete(); }); }); } else { return Observable.of(true); } } } I have a full live example here. http://plnkr.co/edit/C129guNJvri5hbGZGsHp?open=app%2Fapp.component.ts&p=preview
Let's clear a few things up. "but it seems that password is encrypted" First, your password is hashed, not encrypted. There is a difference, namely that hashes are meant to be one-way. There's no way to look at a hash and just regenerate the password from it. Second, they're using MD5. They're not actually salting anything, they're appending the same string to all passwords and THEN hashing it. MD5 is a terrible way to hash because it's stupid easy to break. This is the equivalent of securing your front door with a rubber band. It's not secure because you can make millions of guesses a minute. Yes, it is that bad. Third, with the function and the "salt" known, you can easily set a new password this way (via SQL because I'm not guessing what sort of screwy ORM they're using there) UPDATE users SET password = MD5(CONCAT('supapongherb.com', 'new_password_here')) WHERE id = their_user_id_here Fourth, switch to password_hash. Like now. Get rid of the rubber band and upgrade to a deadbolt, with rabid pitbulls behind it and a shotgun in your lap.
I tried the session approach, but I had issues. This method worked better for me, your mileage may vary: s3 = boto3.resource('s3', config=Config(signature_version='s3v4')) You will need to import Config from botocore.client in order to make this work. See below for a functional method to test a bucket (list objects). This assumes you are running it from an environment where your authentication is managed, such as Amazon EC2 or Lambda with a IAM Role: import boto3 from botocore.client import Config from botocore.exceptions import ClientError def test_bucket(bucket): print 'testing bucket: ' + bucket try: s3 = boto3.resource('s3', config=Config(signature_version='s3v4')) b = s3.Bucket(bucket) objects = b.objects.all() for obj in objects: print obj.key print 'bucket test SUCCESS' except ClientError as e: print 'Client Error' print e print 'bucket test FAIL' To test it, simply call the method with a bucket name. Your role will have to grant proper permissions.
You cannot do that with JavaScript running in the client. See the following entry of the WebCrypto mailing list: On Wed, Jun 24, 2015 at 1:50 PM, Jeffrey Walton wrote: I see the WebCrypto API will allow discovery of keys (http://www.w3.org/TR/WebCryptoAPI/): In addition to operations such as signature generation and verification, hashing and verification, and encryption and decryption, the API provides interfaces for key generation, key derivation, key import and export, and key discovery. Certificates have public keys, and they are not as sensitive as private keys. Will the WebCrypto API allow discovery/enumeration of certificates? Examples of what I would like to discover or enumerate (in addition to the private keys): Trusted roots Client certs Trusted Roots are in the platform's trust store. Client certs may be in the trust store. Thanks in advance, Jeff There are no plans from Chrome to implement such, on the hopefully obvious and significant privacy grounds. Client certs contain PII. Trusted certs contain PII and fingerprinting. In modern, sandboxed operating systems, such as iOS and Android, applications cannot enumerate either, as those platform providers reached the same conclusion. So no. Never.1 1 For some really long value of never
private class Saml2SSOSecurityTokenResolver : SecurityTokenResolver { List<SecurityToken> _tokens; public Saml2SSOSecurityTokenResolver(List<SecurityToken> tokens) { _tokens = tokens; } protected override bool TryResolveSecurityKeyCore(System.IdentityModel.Tokens.SecurityKeyIdentifierClause keyIdentifierClause, out System.IdentityModel.Tokens.SecurityKey key) { var token = _tokens[0] as X509SecurityToken; var myCert = token.Certificate; key = null; var ekec = keyIdentifierClause as EncryptedKeyIdentifierClause; if (ekec != null) { if (ekec.EncryptionMethod == "http://www.w3.org/2001/04/xmlenc#rsa-1_5") { var encKey = ekec.GetEncryptedKey(); var rsa = myCert.PrivateKey as RSACryptoServiceProvider; var decKey = rsa.Decrypt(encKey, false); key = new InMemorySymmetricSecurityKey(decKey); return true; } var data = ekec.GetEncryptedKey(); var id = ekec.EncryptingKeyIdentifier; } return true; } protected override bool TryResolveTokenCore(System.IdentityModel.Tokens.SecurityKeyIdentifierClause keyIdentifierClause, out System.IdentityModel.Tokens.SecurityToken token) { throw new NotImplementedException(); } protected override bool TryResolveTokenCore(System.IdentityModel.Tokens.SecurityKeyIdentifier keyIdentifier, out System.IdentityModel.Tokens.SecurityToken token) { throw new NotImplementedException(); } }
I was able to send an email through CodeIgniter and msmtp without too much trouble. In my case I used Sendgrid as I encountered authentication issues using msmtp with Gmail and Yahoo. Here's my setup (running on Ubuntu 14.04, php 5.5.9, Code Igniter latest): msmtp config - /home/quickshiftin/.msmtprc account sendgrid host smtp.sendgrid.net port 587 auth on tls on tls_starttls on tls_trust_file /etc/ssl/certs/ca-certificates.crt user SendGridUsername password SendGridPassword Code Igniter Controller - application/controller/Tools.php class Tools extends CI_Controller { public function message() { $this->load->library('email'); $this->email->from('[email protected]', 'Nate'); $this->email->to('[email protected]'); $this->email->subject('Send email through Code Igniter and msmtp'); $this->email->message('Testing the email class.'); $this->email->send(); } } Email Library Config - application/config/email.php $config = [ 'protocol' => 'sendmail', 'mailpath' => '/usr/bin/msmtp -C /home/quickshiftin/.msmtprc --logfile /var/log/msmtp.log -a sendgrid -t', ]; Sending the email via the CLI php index.php tools message Thoughts on your issue Is /etc/msmtp/.msmtprc readable by your webserver or command line user? Is /usr/bin/msmtp executable by said user? popen may be disabled in your PHP environment Use a debugger to trace through the call to CI_Email::_send_with_sendmail method to determine why it's failing in your case If you configure a log file for msmtp as I have you can look there after trying to send through Code Igniter to catch potential issues
I found an answer that got me most of the way there; you can set the Uri of the request to list(resourceId('Microsoft.Web/sites/config', variables('webSiteName'), 'publishingcredentials'), '2016-08-01').properties.scmUri. You will also need to concatenate the rest of the path (e.g. /api/triggeredwebjobs/{webjobname}/run). The Uri produced by the above code includes the basic auth credentials, and that is parsed at some point and the username and password are taken out of the Uri so they aren't visible in the Azure portal; the authentication is set to 'Basic', and the credentials are set to the extracted values. However, my Uri had a query string appended to the end to pass parameters into the webjob. During the deployment process, the query string gets mangled (the question mark is escaped to %3F, and if you have any escaped characters in your arguments value, they will get unescaped). I managed to work around this by concatenating strings together to make up the Uri (NOT using the scmUri property), and then setting the authentication property, which is a sibling of the uri property, to look like the following "authentication": { "type": "Basic", "username": "[list(resourceId('Microsoft.Web/sites/config', variables('webSiteName'), 'publishingcredentials'), '2016-08-01').properties.publishingUserName]", "password": "[list(resourceId('Microsoft.Web/sites/config', variables('webSiteName'), 'publishingcredentials'), '2016-08-01').properties.publishingPassword]" }
The trope about MyISAM being faster than InnoDB is a holdover from code that was current in the mid-2000's. MyISAM is not faster than InnoDB anymore, for most types of queries. Look at the benchmarks in this blog from 2007: https://www.percona.com/blog/2007/01/08/innodb-vs-myisam-vs-falcon-benchmarks-part-1/ InnoDB has just gotten better, faster, and more reliable since then. MyISAM is not being developed. Update: In MySQL 8.0, even the system tables have been converted to InnoDB. There is clearly an intention to phase out MyISAM. I expect that it will be deprecated and then removed in future versions of MySQL (but I can't say how many years from now that will be). There were a couple of edge cases where MyISAM might be faster, like table-scans. But you really shouldn't be optimizing your database for table-scans. You should be creating the right indexes to avoid table-scans. Update Feb 2018: MyISAM just suffered an additional 40% performance hit from the recent fix for the Meltdown CPU bug, and this affects table-scans. Assuming you are responsible and patch your systems to fix the Meltdown vulnerability, MyISAM is now a performance liability. See current tests of MyISAM performance with the patch: https://mariadb.org/myisam-table-scan-performance-kpti/ But what trumps that is the fact that InnoDB supports ACID behavior, and MyISAM doesn't support any of the four qualities of ACID. See my answer to MyISAM versus InnoDB Failing to support ACID isn't just an academic point. It translates into things like table-locks during updates, and global locks during backups.
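If you still have legacy MyISAM tables, converting them is a one-line ALTER per table. Below is a minimal sketch using Python and PyMySQL (my choice of client, not something the answer above prescribes; the connection settings and schema name are placeholders) that finds the remaining MyISAM tables in a schema and converts them:

    import pymysql

    # Placeholder connection settings; adjust to your environment.
    conn = pymysql.connect(host="localhost", user="root", password="secret", database="mydb")

    with conn.cursor() as cur:
        # Find every table in this schema that still uses the MyISAM engine.
        cur.execute(
            "SELECT TABLE_NAME FROM information_schema.TABLES "
            "WHERE TABLE_SCHEMA = %s AND ENGINE = 'MyISAM'",
            ("mydb",),
        )
        for (table_name,) in cur.fetchall():
            # Rebuild the table as InnoDB; this locks the table while it runs.
            cur.execute(f"ALTER TABLE `{table_name}` ENGINE=InnoDB")

    conn.close()

Test the conversion on a copy first; features such as full-text and spatial indexes behave differently between the two engines depending on your MySQL version.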
For running a script remotely, you have to ensure that PS-Remoting is enabled. Start Windows PowerShell as an administrator by right-clicking the Windows PowerShell shortcut and selecting Run As Administrator. The WinRM service is configured for manual startup by default. You must change the startup type to Automatic and start the service on each computer you want to work with. At the PowerShell prompt, you can verify that the WinRM service is running using the following command: get-service winrm If the service is not running, please make it running by Start-Service winrm To configure Windows PowerShell for remoting, type the following command: Enable-PSRemoting –force To enable authentication, you need to add the remote computer to the list of trusted hosts for the local computer in WinRM. To do so, type: winrm s winrm/config/client '@{TrustedHosts="RemoteComputer"}' Verify that the service on the remote host is running and is accepting requests by running the following command on the remote host: winrm quickconfig This command analyzes and configures the WinRM service. In your case, you have to do all these in ServerB because ServerB has to trust ServerA. After doing these, you can run the below script from ServerA. Certain points I have added in the script itself for your reference. You can change the placeholders according to your requirement. # Embedding the password in the script. # If you do not have a domain creds, then use the username and password directly. $MyDomain='MyDomain' ; $MyClearTextUsername='Username' ; $MyClearTextPassword='Password' ; $MyUsernameDomain=$MyDomain+'\'+$MyClearTextUsername; $SecurePassword=Convertto-SecureString –String $MyClearTextPassword –AsPlainText –force ; $MyCreds=New-object System.Management.Automation.PSCredential $MyUsernameDomain,$SecurePassword ; # Placing the script under a ScriptBlock $MyScriptblock={param($appPoolName,$pathback) # Since you have mentioned that it is working fine locally, I am not checking this part. Assuming its fine. # Defining the functions as Global. So that you can use it anywhere although I am putting in the scriptblock. # Make sure the module is present in the remote system. It should be cause you have already mentioned it is working fine when you are running from that system. Function fnStartApplicationPool([string]$appPoolName) { import-module WebAdministration if((Get-WebAppPoolState $appPoolName).Value -ne 'Started') { Start-WebAppPool -Name $appPoolName } } Function fnStopApplicationPool([string]$appPoolName) { import-module WebAdministration if((Get-WebAppPoolState $appPoolName).Value -ne 'Stopped') { Stop-WebAppPool -Name $appPoolName } } if ($pathback -eq $false) { #Copying Data from Source to Destination copy-Item -Recurse $backupsrc -Destination $backupdes write-host "Backup Successful" #Validating the apppool value import-module WebAdministration if((Get-WebAppPoolState $appPoolName).Value -ne 'Stopped') { #Stop apppool Stop-WebAppPool -Name $appPoolName write-host "AppPool Stopped Successfully" } #Copying Data from Source to Destination #Start apppool Start-WebAppPool -Name $appPoolName write-host "AppPool Started Sucessfully" cd c:\ } } # As you want to Stop the App pool in Server B from Server A. 
# run the script under server A and provide the Server B creds $result=Invoke-Command -ComputerName 'ServerB' -Credential $MyCreds -ScriptBlock $MyScriptblock -ArgumentList $appPoolName,$pathback ; $result ; If you are satisfied with the answer, feel free to upvote and accept it; that will help others as well.
Rick Fillion (from 1Password) was kind enough to offer some advice: https://twitter.com/rickfillion/status/794370861646172160 Use LAPolicy.DeviceOwnerAuthentication to test. Here’s the code I am running (Swift 2.3): import Cocoa import LocalAuthentication class ViewController: NSViewController { override func viewDidLoad() { super.viewDidLoad() let myContext = LAContext() let myLocalizedReasonString = "unlock itself" var authError: NSError? = nil if #available(iOS 8.0, OSX 10.12, *) { if myContext.canEvaluatePolicy(LAPolicy.DeviceOwnerAuthentication, error: &authError) { myContext.evaluatePolicy(LAPolicy.DeviceOwnerAuthentication, localizedReason: myLocalizedReasonString) { (success, evaluateError) in if (success) { // User authenticated successfully, take appropriate action print("Success") } else { // User did not authenticate successfully, look at error and take appropriate action print("Failure") } } } else { // Could not evaluate policy; look at authError and present an appropriate message to user print("Evaluation") print(authError) } } else { // Fallback on earlier versions print("Fallback") } // Do any additional setup after loading the view. } }
This type of situation is generally solved by having a middle-man; a single entity that your resource servers trust and that can be used to normalize any possible differences that surface from the fact that users may authenticate with distinct providers. This is sometimes referred to as a federation provider. Auth0 is a good example on this kind of implementation. Disclosure: I'm an Auth0 engineer. Auth0 sits between your app and the identity provider that authenticates your users. Through this level of abstraction, Auth0 keeps your app isolated from any changes to and idiosyncrasies of each provider's implementation. (emphasis is mine) It's not that your resource servers can't technically trust more than one authorization server, it's just that moving that logic out of the individual resource servers into a central location will make it more manageable and decoupled. Also have in mind that authentication and authorization are different things although we are used to seeing them together. If you're going to implement your own authorization server, you should make that the central point that can: handle multiple types of authentication providers provide a normalized view of the user profile to downstream resource servers provide the access tokens that can be used by your client application to make authorized requests to your microservices
Assumptions Microsoft SQL Server 2016 Windows 10 Anniversary Update Windows Containers ASP.NET Core application Add a SQL user to your SQL database. In MS SQL expand the database Right click on 'Security / Logins' Select 'New Login' Create a user name and password. Assign a 'Server Role(s)'...I used sysadmin since I'm just testing Under 'User Mapping' I added my new user to my database and used 'dbo' for schema. Change SQL Authentication to allow SQL Server Authentication Mode Right click on your database, select 'Properties / Security / Server Authentication / SQL Server and Windows Authentication Mode' radio button. Restart the MS SQL service. Update your appsettings.json with your new user name and password Example "ConnectionStrings": { "DefaultConnection": "Server=YourServerName;Database=YourDatabaseName;MultipleActiveResultSets=true;User Id=UserNameYouJustAdded;Password=PassordYouJustCreated" }, Make sure you remove Trusted_Connection=True. Create a Docker file My example Docker file FROM microsoft/dotnet:nanoserver ARG source=. WORKDIR /app EXPOSE 5000 EXPOSE 1433 ENV ASPNETCORE_URLS http://+:5000 COPY $source . Publish Application Running from the same location as the Docker file in an elevated PowerShell dotnet publish docker build bin\Debug\netcoreapp1.0\publish -t aspidserver docker run -it aspidserver cmd I wanted to run the container and see the output as it was running in PowerShell. Once the container was up and running in the container at the command prompt I kicked off my application. dotnet nameOfApplication.dll If everything went to plan one should be up and running.
The AccountManager is good for the following reasons: First is to store multiple account names with different levels of access to the app’s features under a single account type. For example, in a video streaming app, one may have two account names: one with demo access to a limited number of videos and the other with full-month access to all videos. This is not the main reason for using Accounts, however, since you can easily manage that in your app without the need for this fancy-looking Accounts thing… . The other advantage of using Accounts is to get rid of the traditional authorization with username and password each time an authorized feature is requested by the user, because the authentication takes place in the background and the user is asked for their password only in certain condition, which I will get to it later. Using the Accounts feature in android also removes the need for defining one’s own account type. You have probably come across the apps using Google accounts for authorization, which saves the hassle of making a new account and remembering its credentials for the user. Accounts can be added independently through Settings → Accounts Cross-platform user authorization can be easily managed using Accounts. For example, the client can access protected material at the same time in their android device and PC without the need for recurrent logins. From the security point of view, using the same password in every request to the server allows for possible eavesdropping in non-secure connections. Password encryption is not sufficient here to prevent password theft. Finally, an important reason for using the Accounts feature in android is to separate the two parties involved in any business dependent on Accounts, so called authenticator and resource owner, without compromising the client (user)’s credentials. The terms may seem rather vague, but don’t give up until you read the following paragraph … Let me elaborate on the latter with an example of a video streaming app. Company A is the holder of a video streaming business in contract with Company B to provide its certain members with premium streaming services. Company B employs a username and password method for recognizing its user. For Company A to recognize the premium members of B, one way would be to get the list of them from B and utilize similar username/password matching mechanism. This way, the authenticator and resource owner are the same (Company A). Apart from the users obligation to remember a second password, it is very likely that they set the same password as their Company B’s profile for using the services from A. This is obviously not favorable. To allay the above shortcomings, OAuth was introduced. As an open standard for authorization, in the example above, OAuth demands that the authorization be done by Company B (authenticator) by issuing some token called Access Token for the eligible users (third party) and then providing Company A (resource owner) with the token. So no token means no eligibility. I have elaborated more on this and more on AccountManager on my website at here
Laravel 5.3 has changes in the Auth implementation. For me, this way solved it: First, provide a company table in the database that fulfils the criteria to be used for identification. Thus, it needs a name, email, password and remember_token column. Details can be found here. In the config/auth.php change the users model to your company class. 'providers' => [ 'users' => [ 'driver' => 'eloquent', 'model' => App\Company::class, ], Create a Company class in the App folder that extends the Auth, so use: use Illuminate\Foundation\Auth\User as Authenticatable; In the Company class, define fillable and hidden fields. class Company extends Authenticatable { protected $fillable = [ 'name', 'email', 'password', ]; protected $hidden = [ 'password', 'remember_token', ]; } In the RegisterController.php change "use App\User" to use App\Company; Adjust the create and validator function in the RegisterController.php with Company::create protected function create(array $data) { return Company::create([ 'name' => $data['name'], 'email' => $data['email'], 'password' => bcrypt($data['password']), ]); } protected function validator(array $data) { return Validator::make($data, [ 'name' => 'required|max:255', 'email' => 'required|email|max:255|unique:companies', 'password' => 'required|min:6|confirmed', ]); } 'email' => 'required|email|max:255|unique:companies' (table name for Company Model will be companies) Hope this helps!
Is it good practice? No, it is not good practice. From the JWT docs: In authentication, when the user successfully logs in using their credentials, a JSON Web Token will be returned and must be saved locally (typically in local storage, but cookies can be also used), instead of the traditional approach of creating a session in the server and returning a cookie. Reference: https://jwt.io/introduction/ JSESSIONID You need to know that there are multiple types of cookies stored in the browser. Many of them can be accessed from JS code, but some of them are httpOnly. This means that the browser is able to append them on every request transparently to the JS code (you will not see the cookie in your code). The default implementation of JSESSIONID on the server side is an example of an httpOnly cookie. There are multiple security reasons for this kind of design: JS malware on your page will not be able to steal the session from the client. Headers myHeader.append('SET-COOKIE', 'JSESSIONID=<jsessionid>'); This is not a valid way to pass cookies to the server; it is the correct way to send a response to the client and set cookies on the client. If you want to pass cookies, you can use: myHeader.append('Cookie', 'JSESSIONID=<jsessionid>'); Still, this will not work. The browser will append its own anyway. That said, JSESSIONID should be appended automatically to your requests by the browser. If it does not work this way, either the JSESSIONID cookie is not set in the browser (check Chrome developer tools; you can view cookies in the Application tab) or you are using a remote server on a different port/host/protocol than your app (then CORS comes in and ruins your app in this case).
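To contrast the two flows: with a session cookie the browser sends JSESSIONID for you, while with a JWT the client sends the token explicitly on every call. A minimal illustration using Python's requests library (the URL and token are placeholders; in an Angular app you would set the same header on your Http calls):

    import requests

    token = "eyJhbGciOi..."  # placeholder: the JWT returned by your login endpoint

    response = requests.get(
        "https://api.example.com/secure/resource",
        headers={"Authorization": f"Bearer {token}"},  # the token travels explicitly; no JSESSIONID needed
    )
    print(response.status_code)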
Hashing is a one-way process, unlike encryption (where we can decrypt using a key). The output has a fixed size, and slight changes in the data produce an entirely new hash value. It is like a fingerprint. Examples: MD5, MD6, SHA-1, SHA-2 and so on. Storing passwords in the database as plain hashes is also not safe, because of rainbow tables, dictionary attacks and brute force (GPUs can compute billions of hashes per second). To avoid these issues we need to use a salt. A salt (random value) is used so that the same password does not always generate the same hash, i.e. a salt is simply added to make a common password uncommon. A salt is something we add before hashing to prevent rainbow attacks using rainbow tables, which are basically just huge lookup tables that convert hashes to passwords as follows: dffsa32fddf23safd -> passwordscrete f32ksd4343fdsafsj -> stackoverflow So a hacker could use such a rainbow table; to avoid this problem we have to store the hash of the combination of password and salt. hash = hashFunction(password + salt) A nonce (number used only once) does not need to be secret or random, but it must not be reused with the same key. This is used to prevent replay attacks (aka playback attacks). See also: hashing-vs-encryption
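To make the salt idea concrete, here is a minimal, language-agnostic sketch in Python (the hash function and iteration count are illustrative choices, not something mandated above); store both the salt and the digest next to the user record:

    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)  # a fresh random salt per password defeats rainbow tables
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, expected)  # constant-time comparison

    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True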
At a basic level, this can be done with the JFrog CLI tools. Unless you want to embed configuration in your .gitlab-ci.yml (I don't) you will first need to run (on your runner): jfrog rt c This will prompt for your Artifactory URL and an API key by default. After entering these items, you'll find ~/.jfrog/jfrog-cli.conf containing JSON like so: { "artifactory": { "url": "http://artifactory.localdomain:8081/artifactory/", "apiKey": "AKCp2V77EgrbwK8NB8z3LdvCkeBPq2axeF3MeVK1GFYhbeN5cfaWf8xJXLKkuqTCs5obpzxzu" } } You can copy this file to the GitLab runner's home directory - in my case, /home/gitlab-runner/.jfrog/jfrog-cli.conf Once that is done, the runner will authenticate with Artifactory using that configuration. There are a bunch of other possibilities for authentication if you don't want to use API keys - check the JFrog CLI docs. Before moving on, make sure the 'jfrog' executable is in a known location, with execute permissions for the gitlab-runner user. From here you can call the utility within your .gitlab-ci.yml - here is a minimal example for a node.js app that will pass the Git tag as the artifact version: stages: - build-package build-package: stage: build-package script: - npm install - tar -czf test-project.tar.gz * - /usr/local/bin/jfrog rt u --build-name="Test Project" --build-number="${CI_BUILD_TAG}" test-project.tar.gz test-repo
WebSecurityConfigurerAdapter approach The HttpSecurity class has a method called exceptionHandling which can be used to override the default behavior. The following sample shows how the response message can be customized. @Override protected void configure(HttpSecurity http) throws Exception { http // your custom configuration goes here .exceptionHandling() .authenticationEntryPoint((request, response, e) -> { String json = String.format("{\"message\": \"%s\"}", e.getMessage()); response.setStatus(HttpServletResponse.SC_UNAUTHORIZED); response.setContentType("application/json"); response.setCharacterEncoding("UTF-8"); response.getWriter().write(json); }); } @ControllerAdvice approach - Why it doesn't work in this case At first I thought about a @ControllerAdvice that catches authentication exceptions for the entire application. import org.springframework.http.HttpStatus; import org.springframework.security.core.AuthenticationException; @ControllerAdvice public class AuthExceptionHandler { @ResponseStatus(HttpStatus.UNAUTHORIZED) @ExceptionHandler(AuthenticationException.class) @ResponseBody public String handleAuthenticationException(AuthenticationException e) { return String.format("{\"message\": \"%s\"}", e.getMessage()); } } In the example above, the JSON is built manually, but you can simply return a POJO which will be mapped into JSON just like from a regular REST controller. Since Spring 4.3 you can also use @RestControllerAdvice, which is a combination of @ControllerAdvice and @ResponseBody. However, this approach doesn't work because the exception is thrown by the AbstractSecurityInterceptor and handled by ExceptionTranslationFilter before any controller is reached.
Edit: User/group based restrictions do not work for static websites hosted in S3 since AWS is not registering your AWS Management Console (path: amazon.com) credentials/cookies for S3 (path: amazonaws.com) and not checking for them either. Workaround: www.s3auth.com - Basic Auth for S3 buckets might do the trick for you but involves a third party. Another solution may be Query String Request Authentication, using an EC2 instance or the Elastic Beanstalk Java SE Static Files Option. We are currently exploring securing our buckets with an Amazon API Gateway as Amazon S3 Proxy. Sidenote: There are some additional things to look out for, which are often not directly pointed out. It is currently not possible in bucket policies to grant or restrict group access, only specific users. Since you also generally don't want to update each bucket policy for each change in your user structure and bucket policies might (unintentionally) interfere with your user policies you may not want to use bucket policies. The user/group based policies only work with the s3:GetBucketLocation and s3:ListAllMyBuckets attached to arn:aws:s3:::* or * (unfortunately no filtering possible here, all bucket names will be visible for users/groups with this policy). IAM Policy Example: (not a S3 Bucket Policy and not working for Static Website Hosting) { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListAllMyBuckets", "s3:GetBucketLocation" ], "Resource": [ "arn:aws:s3:::*" ] }, { "Effect": "Allow", "Action": "s3:GetObject", "Resource": [ "arn:aws:s3:::YOURBUCKETNAME", "arn:aws:s3:::YOURBUCKETNAME/*" ] } ] } More detailed blog post: "How to Restrict Amazon S3 Bucket Access to a Specific IAM Role"
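For the Query String Request Authentication workaround mentioned above, the usual pattern is to hand out pre-signed URLs from a small backend. A minimal boto3 sketch (bucket and key names are placeholders):

    import boto3

    s3 = boto3.client("s3")

    # Time-limited URL; the signature in the query string is the authentication.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "YOURBUCKETNAME", "Key": "index.html"},
        ExpiresIn=3600,  # seconds
    )
    print(url)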
The correct answer to this question is it depends upon the implementation of the server! Preface: Double-slash is syntactically valid according to RFC 2396, which defines URL path syntax. As amn explains, it therefore implies an empty URI segment. Note however that RFC 2396 only defines the syntax, not semantics of paths, including empty path segments, so it is up to your server to decide the semantics of the empty path. You didn't mention the server software stack you're using, perhaps you're even rolling your own? So please use your imagination as to what the semantics could be! Practically, I would like to point out some everyday semantic-related reasons which mean you should avoid double slashes even though they are syntactically valid: Since empty being valid is somehow not expected by everyone, it can cause bugs. And even though your server technology of today might be compatible with it, either your server technology of tomorrow or the next version of your server technology of today might decide not to support it any more. Example: ASP.NET MVC Web API library throws an error when you try to specify a route template with a double slash. Some servers might interpret // as indicating the root path. This can either be on-purpose, or a bug - and then likely it is a security bug, i.e. a directory traversal vulnerability. Because it is sometimes a bug, and a security bug, some clever server stacks and firewalls will see the substring '//', deduce you are possibly making an attempt at exploiting such a bug, and therefore they will return 403 Forbidden or 400 Bad Request etc, and refuse to actually do any further processing of the URI.
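If you want to see the empty segment explicitly, any URL parser will show it; here is a quick illustration with Python's urllib (the URL is made up):

    from urllib.parse import urlsplit

    path = urlsplit("http://example.com/foo//bar").path
    print(path)             # /foo//bar
    print(path.split("/"))  # ['', 'foo', '', 'bar'] -> the '' between 'foo' and 'bar' is the empty segment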
ACLs in Apache Curator are for access control. ZooKeeper does not provide an authentication mechanism in the sense that clients without the correct password cannot connect to ZooKeeper or cannot create ZNodes. What it can do is prevent unauthorized clients from accessing particular ZNode/ZNodes. In order to do that, you have to set up the CuratorFramework instance as I have described below. Remember, this guarantees that a ZNode created with a given ACL can be accessed again by the same client or by a client presenting the same authentication information. First you should build the CuratorFramework instance as follows. Here, the connectString means a comma-separated list of ip:port combinations of the ZooKeeper servers in your ensemble. CuratorFrameworkFactory.Builder builder = CuratorFrameworkFactory.builder() .connectString(connectString) .retryPolicy(new ExponentialBackoffRetry(retryInitialWaitMs, maxRetryCount)) .connectionTimeoutMs(connectionTimeoutMs) .sessionTimeoutMs(sessionTimeoutMs); /* * If authorization information is available, those will be added to the client. NOTE: These auth info are * for access control, therefore no authentication will happen when the client is being started. These * info will only be required whenever a client is accessing an already created ZNode. For another client of * another node to make use of a ZNode created by this node, it should also provide the same auth info. */ if (zkUsername != null && zkPassword != null) { String authenticationString = zkUsername + ":" + zkPassword; builder.authorization("digest", authenticationString.getBytes()) .aclProvider(new ACLProvider() { @Override public List<ACL> getDefaultAcl() { return ZooDefs.Ids.CREATOR_ALL_ACL; } @Override public List<ACL> getAclForPath(String path) { return ZooDefs.Ids.CREATOR_ALL_ACL; } }); } CuratorFramework client = builder.build(); Now you have to start it. client.start(); Creating a path: client.create().withMode(CreateMode.PERSISTENT).forPath("/your/ZNode/path"); Here, the CreateMode specifies what type of node you want to create. Available types are PERSISTENT, EPHEMERAL, EPHEMERAL_SEQUENTIAL, PERSISTENT_SEQUENTIAL, CONTAINER. Java Docs If you are not sure whether the path up to /your/ZNode already exists, you can create it as well. client.create().creatingParentsIfNeeded().withMode(CreateMode.PERSISTENT).forPath("/your/ZNode/path"); Set Data You can either set data when you are creating the ZNode or later. If you are setting data at creation time, pass the data as a byte array as the second parameter to the forPath() method. client.create().withMode(CreateMode.PERSISTENT).forPath("/your/ZNode/path","your data as String".getBytes()); If you are doing it later (data should be given as a byte array): client.setData().forPath("/your/ZNode/path",data); Finally, I don't understand what you mean by "get this path". Apache Curator is a Java client (more than that with Curator Recipes) which uses Apache ZooKeeper in the background and hides the edge cases and complexities of ZooKeeper. In ZooKeeper, the concept of ZNodes is used to store data. You can think of it like the Linux directory structure. All ZNode paths should start with / (root) and you can go on specifying directory-like ZNode paths as you like. Ex: /someName/another/test/sample. ZNodes are organized in a tree structure. Every ZNode can store up to 1MB of data. Therefore, if you want to retrieve data stored in a ZNode, you need to know the path to that ZNode. 
(Just like you should know the table and column of a database in order to retrieve data). If you want to retrieve data at a given path, client.getData().forPath("/path/to/ZNode"); That's all you have to know when you want to work with Curator. One more thing: ACLs in Apache Curator are for access control. That is, if you set the ACLProvider as follows, new ACLProvider() { @Override public List<ACL> getDefaultAcl () { return ZooDefs.Ids.CREATOR_ALL_ACL; } @Override public List<ACL> getAclForPath (String path){ return ZooDefs.Ids.CREATOR_ALL_ACL; } } only the client with credentials identical to the creator's will be given access to the corresponding ZNode later on. Authorization details are set as follows (see the client building example). There are other modes of ACL available, like OPEN_ACL_UNSAFE, which does not do any access control if you set it as the ACLProvider. authorization("digest", authorizationString.getBytes()) They will be used later to control access to a given ZNode. In short, if you want to prevent others from interfering with your ZNodes, you can set the ACLProvider to return CREATOR_ALL_ACL and set the authorization to digest as shown above. Only the CuratorFramework instances using the same authorization string ("username:password") will be able to access those ZNodes. But it will not prevent others from creating ZNodes in paths which do not interfere with yours. Hope you found what you want :-)
I actually consider using the "protected sections" feature in App.Config or Web.Config to be LESS secure than storing the password in your code. Anyone with server access can decrypt that section of the config just as quick as you encrypted it by running the decrypt command described in the article everyone keeps quoting: aspnet_regiis -pd "connectionStrings" -app "/SampleApplication" https://msdn.microsoft.com/en-us/library/zhhddkxy.aspx#Anchor_1 So this feature of ASP.Net only adds security in the case that a hacker somehow had access to your web.config but not your entire server (happened in 2010 as @djteller mentioned in the oracle padding attack comment). But if they do have server access, you're exposed in one cmd call. They don't even have to install ildasm.exe. However, storing actual passwords in your code is a maintenance nightmare. So one thing I've seen done is storing an encrypted password in your web.config and storing the encryption key in your code. This accomplishes the goal of hiding passwords from casual browsing while still being maintainable. In this case a hacker has to at least decompile your code, find your key, and then figure out what encryption algorithm you're using. Not impossible, but certainly harder than running "aspnet_regiis -pd...". Meanwhile I am also looking for better answers to this six year old question...
I managed to Create dynamic menu based on user access right after the user logs in. I think I was not able to articulate the requirement properly in my original question. Yesterday, while searching "how to make 2 components communicate with each other" I found this on the angular website (see link below): https://angular.io/docs/ts/latest/api/core/index/EventEmitter-class.html We can achieve this by using a Global EventEmitter. Here is how I implemented it in my code: GlobalEventManager: import { Injectable, EventEmitter } from "@angular/core"; @Injectable() export class GlobalEventsManager { public showNavBar: EventEmitter<any> = new EventEmitter(); public hideNavBar: EventEmitter<any> = new EventEmitter(); } The below link will help you to understand how to implement auth guard (restricting a user to enter without login). http://jasonwatmore.com/post/2016/08/16/angular-2-jwt-authentication-example-tutorial Auth.Guard.ts: import { Injectable } from '@angular/core'; import { Router, CanActivate, ActivatedRouteSnapshot, RouterStateSnapshot } from '@angular/router'; import { GlobalEventsManager } from "../_common/gobal-events-manager"; @Injectable() export class AuthGuard implements CanActivate { constructor(private router: Router, private globalEventsManager: GlobalEventsManager) { } canActivate() { if (localStorage.getItem('currentUser')) { this.globalEventsManager.showNavBar.emit(true); return true; } else { // not logged in so redirect to login page this.router.navigate(['/login']); this.globalEventsManager.hideNavBar.emit(true); return; } } } Model used in menu.component.ts Features: export class Features { Description: string; RoutePath: string; } menu.component.ts: import { Component, OnInit } from '@angular/core'; import { Router } from '@angular/router'; import { Features } from '../_models/features'; import { Http, Headers, RequestOptions, Response } from '@angular/http'; import { GlobalEventsManager } from "../_common/gobal-events-manager"; @Component({ selector: 'nav', templateUrl: './menu.component.html' }) export class MenuComponent { showNavBar: boolean = false; featureList: Features[] = []; private headers = new Headers({ 'Content-Type': 'application/json' }); constructor(private http: Http, private router: Router, private globalEventsManager: GlobalEventsManager) { this.globalEventsManager.showNavBar.subscribe((mode: any) => { this.showNavBar = mode; if (this.showNavBar = true) { <!-- the below function expects user id, here I have given as 1 --> this.getFeatureListByLoggedInUser(1) .then(list => { this.featureList = list; }); } }); this.globalEventsManager.hideNavBar.subscribe((mode: any) => { this.showNavBar = false; this.featureList = []; }); } private getFeatureListByLoggedInUser(userID: number): Promise<Features[]> { return this.http.get(your api url + '/Feature/GetFeatureListByUserID?userID=' + userID) .toPromise() .then(response => response.json() as Features[]) .catch(this.handleError); } private handleError(error: any): Promise<any> { console.error('An error occurred', error); // for demo purposes only return Promise.reject(error.message || error); } } Menu.Component.html: <div id="navbar" *ngIf="showNavBar" class="navbar-collapse collapse navbar-collapse-custom"> <ul class="nav navbar-nav nav_menu full-width"> <li *ngFor="let feature of featureList" class="nav_menu" routerLinkActive="active"><a class="nav-item nav-link" [routerLink]="[feature.routepath]" routerLinkActive="active">{{feature.description}}</a></li> </ul> </div> App.Component.ts: <!-- menu container 
--> <nav> </nav> <!-- main app container --> <div class="container-fluid body-content-custom"> <div class="col-lg-12 col-md-12 col-sm-12 col-xs-12 no-padding"> <router-outlet></router-outlet> </div> </div> <footer class="footer">   </footer> In the last, we need to register the providers of menu and global event manager in app.module.ts app.module.ts /// <reference path="reset-password/reset-password.component.ts" /> /// <reference path="reset-password/reset-password.component.ts" /> import './rxjs-extensions'; import { NgModule, ErrorHandler } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { FormsModule, ReactiveFormsModule } from '@angular/forms'; import { HttpModule, XHRBackend } from '@angular/http'; import { AppRoutingModule } from './app-routing.module'; import { AppComponent } from './app.component'; import { AuthGuard } from './_guards/auth.guard'; import { ContentHeaders } from './_common/headers'; import { GlobalEventsManager } from "./_common/gobal-events-manager"; import { MenuComponent } from "./menu/menu.component"; @NgModule({ imports: [ BrowserModule, FormsModule, HttpModule, AppRoutingModule, ReactiveFormsModule ], declarations: [ AppComponent, MenuComponent ], providers: [ AuthGuard, ContentHeaders, GlobalEventsManager ], bootstrap: [AppComponent] }) export class AppModule { } I hope this will help!
I decided to go with option 2 in order to minimize the number of calls to the API. I then created a base controller class with a HttpClient factory method, which also checks if the JWT is about to expire: public HttpClient GetHttpClient(string baseAdress) { var client = new HttpClient(); client.BaseAddress = new Uri(baseAdress); client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json")); string token; if (Session["access_token"] != null) { var jwthandler = new JwtSecurityTokenHandler(); var jwttoken = jwthandler.ReadToken(Session["access_token"] as string); var expDate = jwttoken.ValidTo; if (expDate < DateTime.UtcNow.AddMinutes(1)) token = GetAccessToken().Result; else token = Session["access_token"] as string; } else { token = GetAccessToken().Result; } client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token); Session["access_token"] = token; return client; }
My Email config is below: <?php defined('BASEPATH') OR exit('No direct script access allowed.'); // Mail engine switcher: 'CodeIgniter' or 'PHPMailer' $config['useragent'] = 'PHPMailer'; // 'mail', 'sendmail', or 'smtp' $config['protocol'] = 'smtp'; $config['mailpath'] = '/project-folder/sendmail'; $config['smtp_host'] = 'smtp.gmail.com'; $config['smtp_user'] = 'Your Gmail Email'; $config['smtp_pass'] = 'Your Gmail Pass'; $config['smtp_port'] = 587; // (in seconds) $config['smtp_timeout'] = 30; // '' or 'tls' or 'ssl' $config['smtp_crypto'] = 'tls'; // PHPMailer's SMTP debug info level: 0 = off, 1 = commands, 2 = commands and data, 3 = as 2 plus connection status, 4 = low level data output. $config['smtp_debug'] = 0; // Whether to enable TLS encryption automatically if a server supports it, even if `smtp_crypto` is not set to 'tls'. $config['smtp_auto_tls'] = false; // SMTP connection options, an array passed to the function stream_context_create() when connecting via SMTP. $config['smtp_conn_options'] = array(); $config['wordwrap'] = true; $config['wrapchars'] = 76; // 'text' or 'html' $config['mailtype'] = 'html'; // 'UTF-8', 'ISO-8859-15', ...; NULL (preferable) means config_item('charset'), i.e. the character set of the site. $config['charset'] = null; $config['validate'] = true; // 1, 2, 3, 4, 5; on PHPMailer useragent NULL is a possible option, it means that X-priority header is not set at all $config['priority'] = 3; // "\r\n" or "\n" or "\r" $config['crlf'] = "\n"; // "\r\n" or "\n" or "\r" $config['newline'] = "\n"; $config['bcc_batch_mode'] = false; $config['bcc_batch_size'] = 200; // The body encoding. For CodeIgniter: '8bit' or '7bit'. For PHPMailer: '8bit', '7bit', 'binary', 'base64', or 'quoted-printable'. $config['encoding'] = '8bit';
Below is the code written in C++ that does exactly what you are looking for. If anything is not working you can tell me in comments. I have use the find() function in order to minimize the length of code and for better readability. I have primarily modified the for() loop that is encrypting the message (as mentioned by you also). EDIT - Don't look for spaces only while encrypting. Anything other then alphabet should be printed as it is in the encrypted message. I have used return(-1) to do the same and checked it in the if condition in the encryption loop. char alphabet[26]{ 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z'}; #include <iostream> #include <string> using namespace std; int find(char character) { for (int i = 0; i < 26; i++) { if (character == alphabet[i]) { return (i); } } return (-1); } int main() { string key; int keySize; char keyChar[5]; bool keyLoop = true; int keyInteger[5]; while (keyLoop) { cout << "enter in the 5 letter key: "; cin >> key; keySize = key.size(); if (keySize < 5 || keySize > 5) { cout << "Invalid key." << endl; } else { for (int i = 0; i < 5; i++) { keyChar[i] = key[i]; } keyLoop = false; } } for (int x = 0; x < 5; x++) { keyInteger[x] = find(keyChar[x]); } string secretMessage; cin.ignore(); cout << "Enter your secret message: "; getline(cin, secretMessage); int secretMessageSize = secretMessage.size(); string newMessage = secretMessage; int temp; int x = 0; for (int i = 0; i < secretMessageSize; i++) { if (x == 5) x = 0; if (find(secretMessage[i]) == -1) newMessage[i] = secretMessage[i]; else { temp = (find(secretMessage[i]) + keyInteger[x])%26; newMessage[i] = alphabet[temp]; x++; } } for (int i = 0; i < secretMessageSize; i++) { cout << newMessage[i]; } cout << endl; }
I am doing some similar job like you did. static string GetAspAuthToken(string authSiteEndPoint, string userName, string password) { var identityProviderEndpoint = new EndpointAddress(new Uri(authSiteEndPoint + "/wstrust/issue/usernamemixed")); var identityProviderBinding = new WS2007HttpBinding(SecurityMode.TransportWithMessageCredential); identityProviderBinding.Security.Message.EstablishSecurityContext = false; identityProviderBinding.Security.Message.ClientCredentialType = MessageCredentialType.UserName; identityProviderBinding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None; var trustChannelFactory = new WSTrustChannelFactory(identityProviderBinding, identityProviderEndpoint) { TrustVersion = TrustVersion.WSTrust13, }; //This line is only if we're using self-signed certs in the installation trustChannelFactory.Credentials.ServiceCertificate.SslCertificateAuthentication = new X509ServiceCertificateAuthentication() { CertificateValidationMode = X509CertificateValidationMode.None }; trustChannelFactory.Credentials.SupportInteractive = false; trustChannelFactory.Credentials.UserName.UserName = userName; trustChannelFactory.Credentials.UserName.Password = password; var channel = trustChannelFactory.CreateChannel(); var rst = new RequestSecurityToken(RequestTypes.Issue) { AppliesTo = new EndpointReference("http://azureservices/TenantSite"), TokenType = "urn:ietf:params:oauth:token-type:jwt", KeyType = KeyTypes.Bearer, }; RequestSecurityTokenResponse rstr = null; SecurityToken token = null; token = channel.Issue(rst, out rstr); var tokenString = (token as GenericXmlSecurityToken).TokenXml.InnerText; var jwtString = Encoding.UTF8.GetString(Convert.FromBase64String(tokenString)); return jwtString; } Parameter "authSiteEndPoint" is your Tenant Authentication site url. default port is 30071. You can find some resource here: https://msdn.microsoft.com/en-us/library/dn479258.aspx The sample program "SampleAuthApplication" can solve your question.
The way in which the tutorial you mentioned, as well as the Fingerprint Dialog Sample provided by Google, handles authentication is by assuming that the user is authentic when onAuthenticationSucceeded() is called. The Google sample takes this a step further by checking if the Cipher provided by the CryptoObject can encrypt arbitrary data: /** * Proceed the purchase operation * * @param withFingerprint {@code true} if the purchase was made by using a fingerprint * @param cryptoObject the Crypto object */ public void onPurchased(boolean withFingerprint, @Nullable FingerprintManager.CryptoObject cryptoObject) { if (withFingerprint) { // If the user has authenticated with fingerprint, verify that using cryptography and // then show the confirmation message. assert cryptoObject != null; tryEncrypt(cryptoObject.getCipher()); } else { // Authentication happened with backup password. Just show the confirmation message. showConfirmation(null); } } /** * Tries to encrypt some data with the generated key in {@link #createKey} which * only works if the user has just authenticated via fingerprint. */ private void tryEncrypt(Cipher cipher) { try { byte[] encrypted = cipher.doFinal(SECRET_MESSAGE.getBytes()); showConfirmation(encrypted); } catch (BadPaddingException | IllegalBlockSizeException e) { Toast.makeText(this, "Failed to encrypt the data with the generated key. " + "Retry the purchase", Toast.LENGTH_LONG).show(); Log.e(TAG, "Failed to encrypt the data with the generated key." + e.getMessage()); } } This is a valid form of authentication, but if you need to actually store and retrieve a secret (in your case a pin), it is not sufficient. Instead, you can use asymmetric cryptography to encrypt your secret, then decrypt it upon onAuthenticationSucceeded(). This is similar to how authentication is handled in the Asymmetric Fingerprint Dialog Sample, although without a back end server.
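The asymmetric idea in the last paragraph boils down to: encrypt the pin with the public key whenever you like, and only decrypt it with the private key once onAuthenticationSucceeded() fires. Here is a minimal sketch of that encrypt/decrypt round trip in Python with the cryptography package, purely as an illustration (on Android the key pair would live in the AndroidKeyStore and the decrypting Cipher would come from the CryptoObject; the pin value is a placeholder):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Stand-in for a KeyStore-backed key pair.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    )

    # Encrypt at "save pin" time; no user authentication is needed for this step.
    encrypted_pin = public_key.encrypt(b"1234", oaep)

    # Decrypt only after the fingerprint callback succeeds.
    pin = private_key.decrypt(encrypted_pin, oaep)
    print(pin)  # b'1234'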
Yes, there is a way to do it, but it is too complex. Each time you need/want to compact it you need to do some steps carefully. (Maybe this first step is not really needed; try first without it) Fill all free space inside the 'clear' (decrypted) mounted partitions with zeros, so free space is zeroed in the 'clear' view; it will not be zeroed from the 'encrypted' viewpoint, since encryption will encrypt such zeros (a minimal sketch of this zero-fill step is shown after this answer). Shut down the machine and boot with a LiveCD ISO that lets you mount the virtual HDD you are using and a new 'dynamic' and 'empty' one. Set the partition scheme and encryption identically on the new one, but ensure the encryption will not do the 'fill' part, so it does not write all sectors... this is the most important part... this way the new virtual disk is small in size, but encrypted by your LUKS, etc. At this point, only the scheme and encryption are on the new 'small' one; now it is time to mount both encrypted disks... the old and the new, so both can be seen in 'plain' at the same time. Again this is very important: clone from the old 'plain' to the new 'plain' only the sectors that have data (most partition-cloning tools do that). As I say... the most important things (to get a smaller virtual HDD) are: Create a new, empty, dynamic virtual disk. Partition it and encrypt it without writing all sectors; so omit the dd with random data prior to doing the encryption (or else the dynamic file will grow to max), and also omit the fill-empty-space step, which again would grow the virtual disk to max. Clone the partitions from the plain view (mounted and decrypted on the fly), so the clone tool will only write data areas of files, etc., but not free space. There is a small part that will not be able to be reduced... files inside encrypted partitions that have whole clusters filled with zeros (hope you do not have any of those)... the cause is that such space, when not encrypted, is all zeros, so the normal compact sees the whole cluster is zero and does not need to store it; but when it is encrypted, such a cluster is not all zeros inside the real virtual disk file, so the compact method cannot reduce it. The idea behind all this is: when encryption is on, to get the smallest virtual disk size, start with a dynamic and empty one and write as few clusters as possible on it when cloning the previous one. As said, it is too much work... and from time to time, as writes occur, it will start growing and growing again. My best personal recommendation is: get a 'BIG' and 'FAST' disk and use a fixed virtual disk... if I read correctly your disk is only 20GiB... you gain a lot in speed by having it fixed instead of dynamic and will not have to worry about 'fragmentation', etc. Remember, if you use a USB disk for it, get one able to write at 30MiB/s (if you only have USB 2.0 ports); if you are lucky (like me) and have at least one USB 3 port (better if it is a USB 3.1 Gen 2 Type C), search for a 2.5-inch 500GiB SATA III HDD (with write speed greater than 100MiB/s; it is really cheap, less than 25 euros) and a SATA III to USB 3.1 Gen 2 Type C enclosure (also cheap, some are under 15 euros)... and avoid having to 'reduce', 'clone', etc. I have 10 virtual machines on a 500GiB disk (with more than 50% free space), each 20GiB in size (with the Windows system inside taking near 16GiB) and VeraCrypt encryption... so I am in quite a similar case to yours... I opted to use a USB 3.1 Gen 2 Type C enclosure to hold all the fixed-size VDI files... my experience is that encrypted fixed size flies compared to non-encrypted dynamic size. 
Of course, ensure you do the needed tests (when encryption takes place), I mean... test virtual HDD speed with no encryption, then test the encryption algorithms in RAM... and choose a method that is faster than 1.6 times the speed of the disk... so encryption will not be a bottleneck... otherwise you can get really bad speed caused by encryption. Also think about this: how many cores do you show to the guest? That will make encryption speed very different... but also think of the worst case... how much CPU will the non-encryption threads on that guest use? Just as an example... if inside the guest you are doing LZMA2 compression (or H.264 video transcoding, for example), etc.... the free CPU for encryption is very low... so encryption will slow things down a lot... such cases also do a lot of I/O to disk, so a lot of encrypt/decrypt per second is needed. Maybe a better approach... would be... to encrypt the 'container', not the 'system'... in other words... encrypt where the VDI files are stored, not the whole guest system... create a container per VDI if you want different passphrases, etc. That way the VDI can also be dynamic and be compacted, etc. Of course, I would be of more help if you told us what encryption scheme (without details) you are using. This makes a really big difference in the possible answers: Are you encrypting the system partition with a tool that runs on the guest? Then use the 'clone only used clusters' trick. Are you encrypting by setting the VDI encryption property on? Maybe the VirtualBox console will help to compact them. Are you encrypting the container where the VDI is stored? I am quite sure this is not your case, since in such a case compacting can be done as normal; the VDI is not encrypted at all, nor is anything inside it. I talk about VDI... the same applies to the rest of the formats, VHD, VHDX, etc. Remember... if encryption is done on the guest and you still want to reduce (compact) the virtual HDD file... start with a new dynamic one, put the partition scheme and encryption on it but without filling all the disk... at this point the virtual disk file size must not be great, just a few megabytes... then clone from the old one to the new one all used clusters, but not the unused ones. Advice: prepare to repeat the 'compact' by 'cloning' every 100 hours or so of intense use of the guest... if the gain is less than 50% it does not compensate for the effort... then the best that can be done is to use fixed size. Special note: with fixed size the access speed is much higher than with dynamic size... having a dynamic disk at 100% size as if it were fixed is a big loss in speed... how much? You must do the test on your machine; it depends a lot on the CPU, the I/O speed (input/output operations per second) of the storage you have and also on the transfer speed (MiB/s), and other factors... so best do some tests. Since you are talking about 20GiB... better do the test with fixed size... I am quite sure you will enjoy it a lot. Another case would be a 500GiB system partition with only 10% filled... since the space gain could be 450GiB, it is worthwhile to do the clone method to compact it; that is why I explain how to do it... for such people, for you, and for anyone. P.S.: If someone does not know how to do something, that does not mean it is not possible, and if someone says something is not possible, that person had better explain the demonstration or be prepared to be called an idiot; technology improves a lot from time to time, knowledge a lot more.
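For completeness, here is a minimal sketch of the optional zero-fill step from the procedure above, written in Python (the mount point is a placeholder; dd or zerofree do the same job from a shell):

    import os

    def zero_fill_free_space(mount_point: str, chunk_mb: int = 64) -> None:
        """Write zeros until the filesystem is full, then delete the filler file,
        leaving the free space zeroed in the 'clear' (decrypted) view."""
        filler = os.path.join(mount_point, "zero.fill")
        chunk = b"\0" * (chunk_mb * 1024 * 1024)
        try:
            with open(filler, "wb") as f:
                while True:
                    f.write(chunk)
        except OSError:
            pass  # disk full: every free cluster has been overwritten with zeros
        finally:
            if os.path.exists(filler):
                os.remove(filler)

    zero_fill_free_space("/mnt/guest-root")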
Or using BAT/VBS : for the image dimension use the value : 31. example : GetMediaInfo.bat "Path_to_the_folder" "Image_name" 31 ::GetMediaInfo.bat ::By SachaDee - 2016 ::Usage ::GetMediaInfo.bat "Folder" "File" "Value of the Info to GET" ::Possible Value Example : :: 27 = Media Duration for video or music files :: 28 = Bits Rate in Kbs/s :: 31 = Dimensions of an image ::Output ::Information du media @echo off If not exist "#.vbs" call:construct For /f "delims=" %%a in ('cscript //nologo #.vbs "%~1" "%~2" "%~3"') do set $MediaInfo=%%a echo %$MediaInfo% exit/b :construct (echo.dim objShell&echo.dim objFolder&echo.dim objFolderItem&echo.set objShell = CreateObject("shell.application"^)&echo.set objFolder = objShell.NameSpace(wscript.arguments(0^)^)&echo.set objFolderItem = objFolder.ParseName(wscript.arguments(1^)^)&echo.dim objInfo&echo.objInfo = objFolder.GetDetailsOf(objFolderItem, wscript.arguments(2^)^)&echo.wscript.echo objinfo)>#.vbs List of possible value (depend of the file type) : Name - 0 Size - 1 Item type - 2 Date modified - 3 Date created - 4 Date accessed - 5 Attributes - 6 Offline status - 7 Offline availability - 8 Perceived type - 9 Owner - 10 Kind - 11 Date taken - 12 Contributing artists - 13 Album - 14 Year - 15 Genre - 16 Conductors - 17 Tags - 18 Rating - 19 Authors - 20 Title - 21 Subject - 22 Categories - 23 Comments - 24 Copyright - 25 Length - 27 Bit rate - 28 Protected - 29 Camera model - 30 Dimensions - 31 Camera maker - 32 Company - 33 File description - 34 Program name - 35 Duration - 36 Is online - 37 Is recurring - 38 Location - 39 Optional attendee addresses - 40 Optional attendees - 41 Organizer address - 42 Organizer name - 43 Reminder time - 44 Required attendee addresses - 45 Required attendees - 46 Resources - 47 Meeting status - 48 Free/busy status - 49 Total size - 50 Account name - 51 Task status - 52 Computer - 53 Anniversary - 54 Assistant's name - 55 Assistant's phone - 56 Birthday - 57 Business address - 58 Business city - 59 Business P.O. box - 60 Business postal code - 61 Business state or province - 62 Business street - 63 Business fax - 64 Business home page - 65 Business phone - 66 Callback number - 67 Car phone - 68 Children - 69 Company main phone - 70 Department - 71 E-mail address - 72 E-mail2 - 73 E-mail3 - 74 E-mail list - 75 E-mail display name - 76 File as - 77 First name - 78 Full name - 79 Gender - 80 Given name - 81 Hobbies - 82 Home address - 83 Home city - 84 Home country/region - 85 Home P.O. box - 86 Home postal code - 87 Home state or province - 88 Home street - 89 Home fax - 90 Home phone - 91 IM addresses - 92 Initials - 93 Job title - 94 Label - 95 Last name - 96 Mailing address - 97 Middle name - 98 Cell phone - 99 Cell phone - 100 Nickname - 101 Office location - 102 Other address - 103 Other city - 104 Other country/region - 105 Other P.O. box - 106 Other postal code - 107 Other state or province - 108 Other street - 109 Pager - 110 Personal title - 111 City - 112 Country/region - 113 P.O. 
box - 114 Postal code - 115 State or province - 116 Street - 117 Primary e-mail - 118 Primary phone - 119 Profession - 120 Spouse/Partner - 121 Suffix - 122 TTY/TTD phone - 123 Telex - 124 Webpage - 125 Content status - 126 Content type - 127 Date acquired - 128 Date archived - 129 Date completed - 130 Device category - 131 Connected - 132 Discovery method - 133 Friendly name - 134 Local computer - 135 Manufacturer - 136 Model - 137 Paired - 138 Classification - 139 Status - 140 Client ID - 141 Contributors - 142 Content created - 143 Last printed - 144 Date last saved - 145 Division - 146 Document ID - 147 Pages - 148 Slides - 149 Total editing time - 150 Word count - 151 Due date - 152 End date - 153 File count - 154 Filename - 155 File version - 156 Flag color - 157 Flag status - 158 Space free - 159 Bit depth - 160 Horizontal resolution - 161 Width - 162 Vertical resolution - 163 Height - 164 Importance - 165 Is attachment - 166 Is deleted - 167 Encryption status - 168 Has flag - 169 Is completed - 170 Incomplete - 171 Read status - 172 Shared - 173 Creators - 174 Date - 175 Folder name - 176 Folder path - 177 Folder - 178 Participants - 179 Path - 180 By location - 181 Type - 182 Contact names - 183 Entry type - 184 Language - 185 Date visited - 186 Description - 187 Link status - 188 Link target - 189 URL - 190 Media created - 191 Date released - 192 Encoded by - 193 Producers - 194 Publisher - 195 Subtitle - 196 User web URL - 197 Writers - 198 Attachments - 199 Bcc addresses - 200 Bcc - 201 Cc addresses - 202 Cc - 203 Conversation ID - 204 Date received - 205 Date sent - 206 From addresses - 207 From - 208 Has attachments - 209 Sender address - 210 Sender name - 211 Store - 212 To addresses - 213 To do title - 214 To - 215 Mileage - 216 Album artist - 217 Album ID - 218 Beats-per-minute - 219 Composers - 220 Initial key - 221 Part of a compilation - 222 Mood - 223 Part of set - 224 Period - 225 Color - 226 Parental rating - 227 Parental rating reason - 228 Space used - 229 EXIF version - 230 Event - 231 Exposure bias - 232 Exposure program - 233 Exposure time - 234 F-stop - 235 Flash mode - 236 Focal length - 237 35mm focal length - 238 ISO speed - 239 Lens maker - 240 Lens model - 241 Light source - 242 Max aperture - 243 Metering mode - 244 Orientation - 245 People - 246 Program mode - 247 Saturation - 248 Subject distance - 249 White balance - 250 Priority - 251 Project - 252 Channel number - 253 Episode name - 254 Closed captioning - 255 Rerun - 256 SAP - 257 Broadcast date - 258 Program description - 259 Recording time - 260 Station call sign - 261 Station name - 262 Summary - 263 Snippets - 264 Auto summary - 265 Search ranking - 266 Sensitivity - 267 Shared with - 268 Sharing status - 269 Product name - 270 Product version - 271 Support link - 272 Source - 273 Start date - 274 Billing information - 275 Complete - 276 Task owner - 277 Total file size - 278 Legal trademarks - 279 Video compression - 280 Directors - 281 Data rate - 282 Frame height - 283 Frame rate - 284 Frame width - 285 Total bitrate - 286
A lot of code is missing so only a guess can be made: The node.js code is prefixing the IV to the encrypted data which is a common method and you are not removing the 16-bytes of IV prior to decryption. If this is the case split off the IV prefix and use it as the IV for decryption. Example from deprecated documentation section: AES encryption in CBC mode with a random IV (Swift 3+) The iv is prefixed to the encrypted data aesCBC128Encrypt will create a random IV and prefixed to the encrypted code. aesCBC128Decrypt will use the prefixed IV during decryption. Inputs are the data and key are Data objects. If an encoded form such as Base64 if required convert to and/or from in the calling method. The key should be exactly 128-bits (16-bytes), 192-bits (24-bytes) or 256-bits (32-bytes) in length. If another key size is used an error will be thrown. PKCS#7 padding is set by default. This example requires Common Crypto It is necessary to have a bridging header to the project: #import <CommonCrypto/CommonCrypto.h> Add the Security.framework to the project. This is example, not production code. enum AESError: Error { case KeyError((String, Int)) case IVError((String, Int)) case CryptorError((String, Int)) } // The iv is prefixed to the encrypted data func aesCBCEncrypt(data:Data, keyData:Data) throws -> Data { let keyLength = keyData.count let validKeyLengths = [kCCKeySizeAES128, kCCKeySizeAES192, kCCKeySizeAES256] if (validKeyLengths.contains(keyLength) == false) { throw AESError.KeyError(("Invalid key length", keyLength)) } let ivSize = kCCBlockSizeAES128; let cryptLength = size_t(ivSize + data.count + kCCBlockSizeAES128) var cryptData = Data(count:cryptLength) let status = cryptData.withUnsafeMutableBytes {ivBytes in SecRandomCopyBytes(kSecRandomDefault, kCCBlockSizeAES128, ivBytes) } if (status != 0) { throw AESError.IVError(("IV generation failed", Int(status))) } var numBytesEncrypted :size_t = 0 let options = CCOptions(kCCOptionPKCS7Padding) let cryptStatus = cryptData.withUnsafeMutableBytes {cryptBytes in data.withUnsafeBytes {dataBytes in keyData.withUnsafeBytes {keyBytes in CCCrypt(CCOperation(kCCEncrypt), CCAlgorithm(kCCAlgorithmAES), options, keyBytes, keyLength, cryptBytes, dataBytes, data.count, cryptBytes+kCCBlockSizeAES128, cryptLength, &numBytesEncrypted) } } } if UInt32(cryptStatus) == UInt32(kCCSuccess) { cryptData.count = numBytesEncrypted + ivSize } else { throw AESError.CryptorError(("Encryption failed", Int(cryptStatus))) } return cryptData; } // The iv is prefixed to the encrypted data func aesCBCDecrypt(data:Data, keyData:Data) throws -> Data? 
{ let keyLength = keyData.count let validKeyLengths = [kCCKeySizeAES128, kCCKeySizeAES192, kCCKeySizeAES256] if (validKeyLengths.contains(keyLength) == false) { throw AESError.KeyError(("Invalid key length", keyLength)) } let ivSize = kCCBlockSizeAES128; let clearLength = size_t(data.count - ivSize) var clearData = Data(count:clearLength) var numBytesDecrypted :size_t = 0 let options = CCOptions(kCCOptionPKCS7Padding) let cryptStatus = clearData.withUnsafeMutableBytes {cryptBytes in data.withUnsafeBytes {dataBytes in keyData.withUnsafeBytes {keyBytes in CCCrypt(CCOperation(kCCDecrypt), CCAlgorithm(kCCAlgorithmAES128), options, keyBytes, keyLength, dataBytes, dataBytes+kCCBlockSizeAES128, clearLength, cryptBytes, clearLength, &numBytesDecrypted) } } } if UInt32(cryptStatus) == UInt32(kCCSuccess) { clearData.count = numBytesDecrypted } else { throw AESError.CryptorError(("Decryption failed", Int(cryptStatus))) } return clearData; } Example usage: let clearData = "clearData0123456".data(using:String.Encoding.utf8)! let keyData = "keyData890123456".data(using:String.Encoding.utf8)! print("clearData: \(clearData as NSData)") print("keyData: \(keyData as NSData)") var cryptData :Data? do { cryptData = try aesCBCEncrypt(data:clearData, keyData:keyData) print("cryptData: \(cryptData! as NSData)") } catch (let status) { print("Error aesCBCEncrypt: \(status)") } let decryptData :Data? do { let decryptData = try aesCBCDecrypt(data:cryptData!, keyData:keyData) print("decryptData: \(decryptData! as NSData)") } catch (let status) { print("Error aesCBCDecrypt: \(status)") } Example Output: clearData: <636c6561 72446174 61303132 33343536> keyData: <6b657944 61746138 39303132 33343536> cryptData: <92c57393 f454d959 5a4d158f 6e1cd3e7 77986ee9 b2970f49 2bafcf1a 8ee9d51a bde49c31 d7780256 71837a61 60fa4be0> decryptData: <636c6561 72446174 61303132 33343536> Notes: One typical problem with CBC mode example code is that it leaves the creation and sharing of the random IV to the user. This example includes generation of the IV, prefixed the encrypted data and uses the prefixed IV during decryption. This frees the casual user from the details that are necessary for CBC mode. For security the encrypted data also should have authentication, this example code does not provide that in order to be small and allow better interoperability for other platforms. Also missing is key derivation of the key from a password, it is suggested that PBKDF2 be used is text passwords are used as keying material. For robust production ready multi-platform encryption code see RNCryptor.
Well if you are using the softlayer-ruby client it only orders fast servers (I mean it orders servers from the package 200 and hourly servers). These servers do not have available the OS "WIN_2008-STD-R2-SP1_64" that's why you see the error. Now you can see that the OS is available in the portal because likely you selected a server which uses another package, the easy way to know if you selected in the portal a server from another package is verifying if the server has the option hourly, if it does not have the option the server belongs to another package. If you want to order the server using the API you need to use the placeOrder method see this article to understand how to use the method: http://sldn.softlayer.com/blog/bpotter/going-further-softlayer-api-python-client-part-3 here a example using Ruby # Order a Bare Metal Server. # # Build a SoftLayer_Container_Product_Order object for a new # server order and pass it to the SoftLayer_Product_Order API service to order # it. In this care we'll order a Xeon 3460 server with 2G RAM, 100mbit NICs, # 2000GB bandwidth, a 500G SATA drive, CentOS 5 32-bit, and default server # order options. See below for more details. # # Important manual pages: # http://sldn.softlayer.com/reference/datatypes/SoftLayer_Container_Product_Order # http://sldn.softlayer.com/reference/datatypes/SoftLayer_Hardware_Server # http://sldn.softlayer.com/reference/datatypes/SoftLayer_Product_Item_Price # http://sldn.softlayer.com/reference/services/SoftLayer_Product_Order/verifyOrder # http://sldn.softlayer.com/reference/services/SoftLayer_Product_Order/placeOrder # # License: http://sldn.softlayer.com/article/License # Author: SoftLayer Technologies, Inc.<[email protected]> require 'rubygems' require 'softlayer_api' require 'json' # Your SoftLayer API username. USERNAME = 'set me' # Your SoftLayer API key. API_KEY = 'set me' # The number of servers you wish to order in this configuration. quantity = 1 # Where you'd like your new server provisioned. # This can either be the id of the datacenter you wish your new server to be # provisioned in or the string 'FIRST_AVAILABLE' if you have no preference # where your server is provisioned. # Location id 3 = Dallas # Location id 18171 = Seattle # Location id 37473 = Washington, D.C. location = 'AMSTERDAM' # The id of the SoftLayer_Product_Package you wish to order. # In this case the Intel Xeon 3460's package id is 145. package_id = 146 # Build a skeleton SoftLayer_Hardware_Server object to model the hostname and # domain we want for our server. If you set quantity greater then 1 then you # need to define one hostname/domain pair per server you wish to order. hardware = [ { 'hostname' => 'test', # The hostname of the server you wish to order. 'domain' => 'example.org' # The domain name of the server you wish to order. } ] # Build a skeleton SoftLayer_Product_Item_Price objects. These objects contain # much more than ids, but SoftLayer's ordering system only needs the price's id # to know what you want to order. # Every item in SoftLayer's product catalog is assigned an id. Use these ids # to tell the SoftLayer API which options you want in your new server. Use # the getActivePackages() method in the SoftLayer_Account API service to get # a list of available item and price options per available package. 
prices = [ { 'id' => 172_32 }, # Single Processor Quad Core Xeon 1270 - 3.40GHz (Sandy Bridge) - 1 x 8MB cache w/HT { 'id' => 637 }, # RAM 2 GB DDR2 667 { 'id' => 682 }, # CentOS 5.x (32 bit) { 'id' => 876 }, # Disk Controller { 'id' => 20 }, # 500 GB SATA II { 'id' => 342 }, # 20000 GB Bandwidth { 'id' => 273 }, # 100 Mbps Public & Private Network Uplinks { 'id' => 55 }, # Host Ping { 'id' => 58 }, # Automated Notification { 'id' => 420 }, # Unlimited SSL VPN Users & 1 PPTP VPN User per account { 'id' => 418 }, # Nessus Vulnerability Assessment & Reporting { 'id' => 21 }, # 1 IP Address { 'id' => 57 }, # Email and Ticket { 'id' => 906 } # Reboot / KVM over IP ] # Build a skeleton SoftLayer_Container_Product_Order_Hardware_Server object # containing the order you wish to place. order_template = { 'quantity' => quantity, 'location' => location, 'packageId' => package_id, 'prices' => prices, 'hardware' => hardware } # Declare the API client to use the SoftLayer_Product_Order API service client = SoftLayer::Client.new(username: USERNAME, api_key: API_KEY) product_order_service = client.service_named('SoftLayer_Product_Order') # verifyOrder() will check your order for errors. Replace this with a call to # placeOrder() when you're ready to order. Both calls return a receipt object # that you can use for your records. # # Once your order is placed it'll go through SoftLayer's provisioning process. # When it's done you'll have a new SoftLayer_Virtual_Guest object and CCI ready # to use. begin receipt = product_order_service.verifyOrder(order_template) puts receipt rescue StandardError => exception puts "There was an error in your order: #{exception}" end Regards
It's hard to provide a good solution without a few more pieces of information about the problem at hand. What is the client in this case? If you are building some type of web site/app then the generation code is exposed to the user, which is a big security problem. If this is something taking place in a standalone compiled application that the user cannot inspect (since the original source used to produce the binary is not on the client), then all you have to do is choose a generation scheme with a well-defined inverse; JWT, RSA, AES, etc. are examples. JWTs are made up of 3 parts [token = encodeBase64(header) + '.' + encodeBase64(payload) + '.' + encodeBase64(signature)] and can carry an arbitrarily sized JSON payload, which is what makes them a bit larger, but you will have this issue with many methods if you would like the token to convey more than just a true or false value (which is most likely the case). If you are interacting with external resources I would suggest you generate these tokens on a separate service inaccessible to the user.
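To illustrate the three-part structure mentioned above, here is a minimal sketch in Java that builds an HMAC-SHA256-signed token by hand using only standard library classes. The header/payload contents and the secret are made-up placeholders, and a real application should use a maintained JWT library on a server-side service rather than rolling its own:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TokenSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical secret known only to the token-issuing service
        byte[] secret = "change-me-to-a-random-256-bit-secret".getBytes(StandardCharsets.UTF_8);

        // Header and payload are plain JSON strings (placeholders for illustration)
        String header = "{\"alg\":\"HS256\",\"typ\":\"JWT\"}";
        String payload = "{\"sub\":\"user-42\",\"exp\":1735689600}";

        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String signingInput = enc.encodeToString(header.getBytes(StandardCharsets.UTF_8))
                + "." + enc.encodeToString(payload.getBytes(StandardCharsets.UTF_8));

        // signature = HMAC-SHA256(signingInput, secret)
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        String signature = enc.encodeToString(mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8)));

        // token = base64url(header) . base64url(payload) . base64url(signature)
        String token = signingInput + "." + signature;
        System.out.println(token);
    }
}

The verifying side recomputes the signature over the first two segments with the same secret and compares; only the service holding the secret can mint tokens that pass that check.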
Default code snippet. Just replace the above code snippet with the one given below in the Startup.Auth.cs class, and fill in your own Consumer Key and Consumer Secret.

app.UseTwitterAuthentication(new TwitterAuthenticationOptions
{
    ConsumerKey = "XXXXXXXXXXXXXXXXXXXXXX",
    ConsumerSecret = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
    BackchannelCertificateValidator = new Microsoft.Owin.Security.CertificateSubjectKeyIdentifierValidator(new[]
    {
        "A5EF0B11CEC04103A34A659048B21CE0572D7D47", // VeriSign Class 3 Secure Server CA - G2
        "0D445C165344C1827E1D20AB25F40163D8BE79A5", // VeriSign Class 3 Secure Server CA - G3
        "7FD365A7C2DDECBBF03009F34339FA02AF333133", // VeriSign Class 3 Public Primary CA - G5
        "39A55D933676616E73A761DFA16A7E59CDE66FAD", // Symantec Class 3 Secure Server CA - G4
        "add53f6680fe66e383cbac3e60922e3b4c412bed", // Symantec Class 3 EV SSL CA - G3
        "4eb6d578499b1ccf5f581ead56be3d9b6744a5e5", // VeriSign Class 3 Primary CA - G5
        "5168FF90AF0207753CCCD9656462A212B859723B", // DigiCert SHA2 High Assurance Server CA
        "B13EC36903F8BF4701D498261A0802EF63642BC3"  // DigiCert High Assurance EV Root CA
    })
});
You need to configure CORS in the Web API application. There is a NuGet package for CORS; basically it's an attribute on the Web API controller plus a one-line configuration in the Web API config class, something like config.EnableCors(). No Angular configuration is needed.

MenuItemsController.cs

[RoutePrefix("api/menus")]
[EnableCors(origins: "*", headers: "*", methods: "*")]
public class MenuItemsController : ApiController
{
    [Route("")]
    public IHttpActionResult Post()
    {
        return Ok(new { Result = "post menus" });
    }

    [Route("")]
    public IHttpActionResult Get()
    {
        return Ok(new { Result = "get menus" });
    }
}

WebApiConfig.cs

public static void Register(HttpConfiguration config)
{
    // Web API configuration and services
    // Configure Web API to use only bearer token authentication.
    config.SuppressDefaultHostAuthentication();
    config.Filters.Add(new HostAuthenticationFilter(OAuthDefaults.AuthenticationType));

    config.EnableCors();

    // Web API routes
    config.MapHttpAttributeRoutes();

    config.Routes.MapHttpRoute(
        name: "DefaultApi",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional }
    );
}

NuGet packages: Microsoft.AspNet.WebApi.Cors, Microsoft.Owin.Cors, Microsoft.AspNet.Cors

I published my app and it worked fine. I didn't need any extra configuration or Angular request parameters. Try a GET or POST here: http://www.blocodecodigo.com.br/api/menus.
As a matter of general patterns, you might benefit from using ngrx/store for centralized application state management (state, actions, reducers) in combination with ngrx/effects for dealing with "side-effects" like REST-API calls. An "official" example app can be found here: ngrx/example-app It is worth exploring in general if you intend to build complex Angular 2 apps with a reactive (functional) approach. Specific to your case, between a slice of the store State that captures your login status, actions related starting login, login success action, login failure action, and the effect that wraps the API call for authentication and is triggered by the start login action you should have a good framework in place to address managing log-in state. Your services/components can then use observables of slices of the store state to respond to state changes. Then you can use all the powerful RxJs operators to map, compose etc. So adding in additional aspects like triggering actions related to "timeout" or retry can be added to the basic flow of events.
Based on Connection.Credentials Property and CredentialCache.DefaultCredentials Property. The DefaultCredentials property applies only to NTLM, negotiate, and Kerberos-based authentication. DefaultCredentials represents the system credentials for the current security context in which the application is running. For a client-side application, these are usually the Windows credentials (user name, password, and domain) of the user running the application. For ASP.NET applications, the default credentials are the user credentials of the logged-in user, or the user being impersonated. Assuming you are using a different form of authentication, you can create your credentials based on the current logged-in user. Take a look at ICredentials Interface. The code is tad bit too long, so just check it out. Implementing the ICredentials Interface contains only one method that returns NetworkCredential Class. Simply construct the NetworkCredentialClass instance using the current logged-in user's credentials. At the end, you'd have something like this: var hubConnection = new HubConnection(System.Configuration.ConfigurationManager.AppSettings["fwServiceAddress"].ToString()); hubConnection.Credentials = CredentialCache.DefaultCredentials; // returns ICredentials IHubProxy customerHub = hubConnection.CreateHubProxy("customer"); await hubConnection.Start(); await customerHub.Invoke("NewNoteAdded", newNote); Please note by logged-in user I mean logged-in on your MVC application and not on the client computer. If you want the credentials of the active directory user then you must also implement windows authentication on your web app. See How to implement Windows authentication and authorization in ASP.NET.
There are several methods of login and authentication that can be used. Just binding values to form post parameters may not be sufficient or correct. You will find the login form has hidden session identities that must be passed as well as the login data. I find that recording a test two times using as nearly as possible the same inputs and doing the same activities helps. These two tests can then be compared to find the dynamic data that needs to be handled. In a comment the questioner added "I noticed these parameters, n1-43 are different but I have no idea what they represent. How do I handle them?". I can have no idea what they represent as I do not know the website you are testing. You could ask the website developers. Or, better, treat them as dynamic data. Find where the values come from, save them into context variables and use them as needed. This is basic web test development. Here and here are two good articles on what to do. The message about JavaScript not being supported can be ignored. Visual Studio web tests do not support JavaScript or any other "active" parts of a web page, they only support the html part. Your job as a tester is to simulate what the JavaScript does for the specific user journeys you are testing. That simulation is generally just filling in the correct values (via context parameters) in the recorded requests. Unexpected response urls can be due to earlier failures, such as teh login not working. I suggest not worrying about them until all of the other test problems are solved. Then, if you need help ask another new question.
After some talk with a friend, the solution that is more appropriate (in my opinion) is the following. (the code can be cleaned up) // // POST: /Account/Login [HttpPost] [AllowAnonymous] [ValidateAntiForgeryToken] public async Task<IActionResult> Login(LoginViewModel model) { if (ModelState.IsValid) { // Do a rest call to the API Uri _baseUri = new Uri("http://localhost:8000/"); var client = new RestClient(_baseUri + "api/Account/login"); var request = new RestRequest(Method.POST); request.AddHeader("postman-token", "7ee2a21b-70d5-8a68-f0dd-518b8a61ddbf"); request.AddHeader("cache-control", "no-cache"); request.AddHeader("content-type", "application/x-www-form-urlencoded"); request.AddParameter("application/x-www-form-urlencoded", "Email=blah%40gmail.com&password=a1Aa1Aa1A!&=", ParameterType.RequestBody); IRestResponse response = client.Execute(request); // Check the response if (response.StatusCode == HttpStatusCode.OK) { // Grab the cookie for the Identity // this can be replaced by a token in the future String cookie = response.Cookies.Where(c => c.Name == ".AspNetCore.Identity.Application").First().Value; // Store the cookie value to use it in sub-sequent requests HttpContext.Session.SetString("IdentityCookieId", cookie); // Add claims to our new user, an example Name and an example Role const string Issuer = "http://blah.com"; var claims = new List<Claim>(); claims.Add(new Claim(ClaimTypes.Name, "AnonymUser", ClaimValueTypes.String, Issuer)); claims.Add(new Claim(ClaimTypes.Role, "Administrator", ClaimValueTypes.String, Issuer)); var userIdentity = new ClaimsIdentity("SecuredLoggedIn"); userIdentity.AddClaims(claims); var userPrincipal = new ClaimsPrincipal(userIdentity); // Sign in the user creating a cookie with X ammount of Expiry await HttpContext.Authentication.SignInAsync("Cookie", userPrincipal, new AuthenticationProperties { ExpiresUtc = DateTime.UtcNow.AddMinutes(1), IsPersistent = false, AllowRefresh = false }); // Move back to the ReturnUrl or for me always to the dashboard return RedirectToLocal("/dashboard"); } } return View(model); } Ofcourse you must edit the Startup.cs file under ConfigureServices to add services.AddAuthorization(); before your AddMvc(). And under Configure add app.UseCookieAuthentication(new CookieAuthenticationOptions { AuthenticationScheme = "Cookie", LoginPath = new PathString("/account/login/"), AccessDeniedPath = new PathString("/Account/Forbidden/"), AutomaticAuthenticate = true, AutomaticChallenge = true });
You need to add the mailer configuration in /config/environments/production.rb, since Heroku runs in production mode by default. Add this to your /config/environments/production.rb:

config.action_mailer.smtp_settings = {
  address: "smtp.gmail.com",
  port: 587,
  domain: Rails.application.secrets.domain_name,
  authentication: "plain",
  enable_starttls_auto: true,
  user_name: Rails.application.secrets.email_provider_username,
  password: Rails.application.secrets.email_provider_password
}

# ActionMailer Config
config.action_mailer.default_url_options = { :host => Rails.application.secrets.domain_name }
config.action_mailer.delivery_method = :smtp
config.action_mailer.perform_deliveries = true
config.action_mailer.raise_delivery_errors = false

Then in /config/secrets.yml add the following under the production block:

production:
  domain_name: <%= ENV["DOMAIN_NAME"] %>
  email_provider_username: <%= ENV["GMAIL_USERNAME"] %>
  email_provider_password: <%= ENV["GMAIL_PASSWORD"] %>

Finally, you must add these 3 environment variables to your Heroku app. To add them, go to your Heroku app settings page; I'm assuming it should be https://dashboard.heroku.com/apps/dev-match-matt-napper/settings There you'll see a Reveal Config Vars button; click on it and add each environment variable name as the key with its value. For example: Key: GMAIL_USERNAME, Value: your Gmail email address.
As others have pointed out in the comments above, there are several flaws in your program. The first flaw is your usage of the strcat function. If you read the documentation, you will see that strcat treats the first argument as a destination pointer and therefore expects the caller to allocate sufficient memory (enough to hold the concatenated string) for the destination. In your case you are passing the string " ", which can accommodate only 1 character. This is why you are getting the buffer overflow or segmentation fault. The second error in your program is in the usage of the strcmp function. This function returns 0 (which compares equal to false, not true, with the definitions in stdbool.h) when the two strings are equal. The third problem in your program is in the usage of the function strtok. You need to pass NULL as the first argument from the second call onward to get the pointers to the remaining tokens. So fix these 3 errors first and then think about what else needs to be corrected in order to get your desired output.
OK I've been able to make it work. Basically, there are two mistakes in your code:

- username and shardId are properties of the user, not of the shared notebook.
- the linked notebook has to be created on the user store, not the business one.

Basically the linked notebook is only a 'link' to the 'real' shared business notebook you created. It allows the user to access the business notebook. So here is some code that works for me:

$client = new \Evernote\AdvancedClient($authToken, false, null, null, false);
$userStore = $client->getUserStore();
$userNoteStore = $client->getNoteStore();
$ourUser = $userStore->getUser($authToken);
if (!isset($ourUser->accounting->businessId)) {
    $returnObject = new \stdClass();
    $returnObject->status = 400;
    $returnObject->message = 'Not a business user';
    return $returnObject;
}
$bAuthResult = $userStore->authenticateToBusiness($authToken);
$bAuthToken = $bAuthResult->authenticationToken;
$bNoteStore = $client->getBusinessNoteStore();
$title = 'My title';
$newNotebook = new \EDAM\Types\Notebook();
$newNotebook->name = $title;
$newNotebook = $bNoteStore->createNotebook($bAuthToken, $newNotebook);
$sharedNotebook = $newNotebook->sharedNotebooks[0];
$newLinkedNotebook = new \EDAM\Types\LinkedNotebook();
$newLinkedNotebook->shareName = $newNotebook->name;
$newLinkedNotebook->shareKey = $sharedNotebook->shareKey;
$newLinkedNotebook->username = $bAuthResult->user->username;
$newLinkedNotebook->shardId = $bAuthResult->user->shardId;
$newLinkedNotebook = $userNoteStore->createLinkedNotebook($authToken, $newLinkedNotebook);

Hope that helps!

PS: say hi to Chris for me ;)
So I discovered the answer to this question by examining the OWIN cookie authentication middleware source code on CodePlex. Cookies created using the middleware by an MVC controller are created differently from cookies created by Web API. MVC cookies are a reference to user information stored in session, and since Web API is completely stateless (no session), cookies created in MVC cannot be used in Web API. In addition, it is bad practice to use cookie authentication in Web API anyway; bearer token authentication is a preferable option. In my case, where I needed to use WS-Federation authentication, the solution was to:

- Add bearer token authentication middleware to my app
- Create a Web API endpoint (ideally cryptically named) that will securely receive WS-Federation claims, perform validation to ensure the request really came from your MVC controller, use them to generate a bearer token, and respond with the generated bearer token
- Upon authenticating in MVC, serialize the claims and marshal them over to Web API using the endpoint created earlier
- Add the bearer token to a hidden field in the SPA

Many, many thanks to @Juan for providing me with feedback and links to point me in the right direction.
Downloading Content in the Background Yes it is possible. You primary have the options to use a scheduled task or execute a code block with a push notification. In my opinion it's easiest with a scheduled task, but I have have experienced that the task is not always executed. So if your app rely on the background fetch you should check if the data is downloaded at application:willEnterForeground and download data if new data is not available. Here's the link to the Objective-c documentation on the topic: Background Execution Objective-c: The process for creating a configuration object that supports background downloads is as follows: Create the configuration object using the backgroundSessionConfigurationWithIdentifier: method of NSURLSessionConfiguration. Set the value of the configuration object’s sessionSendsLaunchEvents property to YES. if your app starts transfers while it is in the foreground, it is recommend that you also set the discretionary property of the configuration object to YES. Configure any other properties of the configuration object as appropriate. Use the configuration object to create your NSURLSession object. Once configured, your NSURLSession object seamlessly hands off upload and download tasks to the system at appropriate times. If tasks finish while your app is still running (either in the foreground or the background), the session object notifies its delegate in the usual way. If tasks have not yet finished and the system terminates your app, the system automatically continues managing the tasks in the background. If the user terminates your app, the system cancels any pending tasks. When all of the tasks associated with a background session are complete, the system relaunches a terminated app (assuming that the sessionSendsLaunchEvents property was set to YES and that the user did not force quit the app) and calls the app delegate’s application:handleEventsForBackgroundURLSession:completionHandler: method. (The system may also relaunch the app to handle authentication challenges or other task-related events that require your app’s attention.) In your implementation of that delegate method, use the provided identifier to create a new NSURLSessionConfiguration and NSURLSession object with the same configuration as before. The system reconnects your new session object to the previous tasks and reports their status to the session object’s delegate. Since I code using Swift I'll provide some documentation on that. Swift 3.0 Create a Scheduler To initialize a scheduler, call init(identifier:) for NSBackgroundActivityScheduler, and pass it a unique identifier string in reverse DNS notation (nil and zero-length strings are not allowed) that remains constant across launches of your application. let activity = NSBackgroundActivityScheduler(identifier: "com.example.MyApp.updatecheck") The system uses this unique identifier to track the number of times the activity has run and to improve the heuristics for deciding when to run it again in the future. Configure Scheduler Properties There's several properties you could configure, check the API reference for that. E.G: Scheduling an activity to fire once each hour activity.repeats = true activity.interval = 60 * 60 Schedule Activity with scheduleWithBlock: When your block is called, it’s passed a completion handler as an argument. 
Configure the block to invoke this handler, passing it a result of type NSBackgroundActivityScheduler.Result to indicate whether the activity finished (finished) or should be deferred (deferred) and rescheduled for a later time. Failure to invoke the completion handler results in the activity not being rescheduled. For work that will be deferred and rescheduled, the block may optionally adjust scheduler properties, such as interval or tolerance, before calling the completion handler.

activity.scheduleWithBlock() { (completion: NSBackgroundActivityCompletionHandler) in
    // Perform the activity, then report the result via the handler that was passed in
    completion(NSBackgroundActivityResult.Finished)
}

Notes to remember:
- Apps only get ~10 mins (~3 mins as of iOS 7) of background execution - after this the timer will stop firing.
- As of iOS 7, when the device is locked it will suspend the foreground app almost instantly. The timer will not fire after an iOS 7 app is locked.
This may help you: I fully agree with to use database router. What I did is, I have used single admin interface to handle multiple databases. Note that authentication for all apps are stored in the default database. Settings.py # Define the database manager to setup the various projects DATABASE_ROUTERS = ['manager.router.DatabaseAppsRouter'] DATABASE_APPS_MAPPING = {'app1': 'db1', 'app2':'db2'} DATABASES = { #For login authentication of both app I have used postgres sql 'default': { 'ENGINE': 'django.db.backends.postgresql_psycopg2', 'NAME': 'fail_over', 'USER': 'SomeUser', 'PASSWORD': 'SomePassword', 'HOST': '127.0.0.1', 'PORT': '', }, # Set this parameters according to your database configuration 'db1': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path.join(PROJECT_DIR, 'yourdatabasename.db'), }, # Set this parameters according to your database configuration 'db2' : { 'ENGINE' : 'django_mongodb_engine', 'NAME' : 'my_database' } } Sample Models # Create your models here for app1. class Modelapp1(models.Model): field1 = models.TextField(max_length=100) field2 = models.TextField(max_length=200) class Meta: app_label = 'app1' def __unicode__(self): return self.field1 # Create your models here for app2. class Modelapp2(models.Model): field1 = models.CharField(max_length=25) class Meta: app_label = 'app2' def __unicode__(self): return self.field
After digging I found the issue, initially there was a configuration issue - the above comments helped thanks. The Audiences and Issuers must match your azure site including including the trailing slash. The issue once the configuration had been corrected was that the token which is passed correctly from my App did not get processed at the server side so all Authorized areas where out-of-bounds. This was because of the order of calls in the ConfigureMobileApp method. I was calling the app.UseWebApi method before the app.UseAppServiceAuthentication method, changing the order suddenly had the token being tested again. The dummy site I have working now has the following: public static void ConfigureMobileApp(IAppBuilder app) { HttpConfiguration config = new HttpConfiguration(); //For more information on Web API tracing, see http://go.microsoft.com/fwlink/?LinkId=620686 SystemDiagnosticsTraceWriter traceWriter = config.EnableSystemDiagnosticsTracing(); new MobileAppConfiguration() .UseDefaultConfiguration() .MapApiControllers() .ApplyTo(config); config.MapHttpAttributeRoutes(); // Use Entity Framework Code First to create database tables based on your DbContext //Database.SetInitializer(new EducaterAPIDevInitializer()); // To prevent Entity Framework from modifying your database schema, use a null database initializer // Database.SetInitializer<EducaterAPIDevContext>(null); MobileAppSettingsDictionary settings = config.GetMobileAppSettingsProvider().GetMobileAppSettings(); if (string.IsNullOrEmpty(settings.HostName)) { var options = new AppServiceAuthenticationOptions { SigningKey = ConfigurationManager.AppSettings["SigningKey"], ValidAudiences = new[] { ConfigurationManager.AppSettings["ValidAudience"] }, ValidIssuers = new[] { ConfigurationManager.AppSettings["ValidIssuer"] }, TokenHandler = config.GetAppServiceTokenHandler() }; app.UseAppServiceAuthentication(options); } app.UseWebApi(config); }
I used to do separation here by having a completely separate install of CI within a subfolder called 'admin'. The admin part of the site was then completely separate from the main site. This works very well, requires no special setup, and it means security for admin (authorisation and authentication) can be set to be very strict on all pages within admin. Nowadays, if I were doing this, I would simply keep the admin controllers within the main site in their own folder. This is manageable now because the admin folder would only need a few controllers, with most of the functionality in libraries where I can have as many subfolders in the folder structure as I need to keep the roles clean. You can also run both parts of separate CI systems through a single index.php and a single install of CI, but personally I find the complexity added by all the separate settings for which set of views or controllers to call just a pain, especially for debugging. If I were you, and this is your first time doing this, I would simply do two installs of CI. They can both use the same database quite easily, and setting this up is just a case of a few minutes creating a subfolder and a new install.
I'll buck the trend a bit and say you should return 200. The status code 401 is related to HTTP authentication. W3C has the following to say on the status code: The request requires user authentication. The response MUST include a WWW-Authenticate header field (section 14.47) containing a challenge applicable to the requested resource. The client MAY repeat the request with a suitable Authorization header field (section 14.8). If the request already included Authorization credentials, then the 401 response indicates that authorization has been refused for those credentials. If the 401 response contains the same challenge as the prior response, and the user agent has already attempted authentication at least once, then the user SHOULD be presented the entity that was given in the response, since that entity might include relevant diagnostic information. (source) Since your server presumably does not use HTTP authentication itself, you won't be returning a WWW-Authenticate header with a challenge, hence you won't be following this spec correctly. The 3rd party API you are calling may do this correctly, but that is by the by. Your user has requested a page from you, not the third party API directly, and they are authorised to do that. Your server has not decided that they are not worthy of a valid response - someone else's server has just told you that their token is not valid. Given this, I would return a 200. The request has succeeded. Your server is able to return information indicating that the third party API call failed.
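If it helps to make that concrete, here is a small sketch in Java using only the JDK's built-in com.sun.net.httpserver (the endpoint path, upstream URL, and response body shape are invented for illustration) of a proxy-style handler that still answers 200 to its own, authorised client while reporting the third-party API's authentication failure in the body:

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ProxyStatusSketch {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/profile", exchange -> {
            // Call the third-party API (placeholder URL) with the stored token
            URL api = new URL("https://thirdparty.example.com/me");
            HttpURLConnection conn = (HttpURLConnection) api.openConnection();
            conn.setRequestProperty("Authorization", "Bearer stored-token");
            int upstream = conn.getResponseCode();

            // Our own client is authorised to ask us, so we answer 200 either way
            // and describe the upstream failure in the response body instead.
            String body = (upstream == 401)
                    ? "{\"ok\":false,\"error\":\"third-party token rejected\"}"
                    : "{\"ok\":true}";
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(bytes);
            }
        });
        server.start();
    }
}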
From looking into the available documentation the following should be possible. By defining the appropriate scope you can instruct Auth0 to include specific information - claims - within the ID token returned as the outcome of a successful authentication. If all the user profile information is included in the token itself you can then use Knock in such way that it will create the user model instance from the JWT payload itself without any need to query additional stores. By default, Knock assumes the payload as a subject (sub) claim containing the entity's id and calls find on the model. If you want to modify this behaviour, implement within your entity model a class method from_token_payload that takes the payload in argument. class User < ActiveRecord::Base def self.from_token_payload payload # Returns a valid user, `nil` or raise end end (source: Knock Customization) With this approach the token itself is sufficient and there is no further interaction between the Rails API and Auth0. The user model is created from the token and the Auth0 database is not directly accessed by Rails API, it just uses the information stored there and surfaces on the token. It should be possible to go with other approaches with more direct interaction with the Auth0 database. If you need to go down that route you should look into the Management API (user related endpoints) as a way for you to interact with the Auth0 database from your own application.
Try using this. Note that a factory has to return the service object (attaching functions to this inside a factory gets lost), and Login should return the $http promise so the controller can react to the result.

.factory('AuthenticationService', ['Base64', '$http', '$cookieStore', '$rootScope',
    function (Base64, $http, $cookieStore, $rootScope) {
        var service = {};

        service.Login = function (username, password) {
            var authdata = Base64.encode(username + ':' + password);
            $rootScope.globals = {
                currentUser: {
                    username: username,
                    authdata: authdata
                }
            };
            $http.defaults.headers.common['Authorization'] = 'Basic ' + authdata;
            $cookieStore.put('globals', $rootScope.globals);

            // Return the promise so the caller can inspect the server response
            return $http.post('http://localhost:8080/v1/login', { username: username, password: password });
        };

        service.ClearCredentials = function () {
            $rootScope.globals = {};
            $cookieStore.remove('globals');
            $http.defaults.headers.common.Authorization = 'Basic ';
        };

        return service;
    }])

Controller

.controller('LoginController', ['$scope', '$rootScope', '$location', 'AuthenticationService',
    function ($scope, $rootScope, $location, AuthenticationService) {
        // reset login status
        AuthenticationService.ClearCredentials();

        $scope.login = function () {
            $scope.dataLoading = true;
            AuthenticationService.Login($scope.username, $scope.password)
                .then(function (response) {
                    var data = response.data;
                    if (data && data.success) {
                        $location.path('/');
                    } else {
                        $scope.error = data && data.message;
                        $scope.dataLoading = false;
                    }
                }, function () {
                    $scope.error = 'Login request failed';
                    $scope.dataLoading = false;
                });
        };
    }]);
Please remove ApiKey = ApiKey, from your DriveService. You are confusing the client library its trying to use a public api key when it should be using your OAuth credentials. seen this a couple of times posted an issue on the library Creating service with credentials and ApiKey Update could also be an issue wit your code My drive Auth code: /// <summary> /// This method requests Authentcation from a user using Oauth2. /// Credentials are stored in System.Environment.SpecialFolder.Personal /// Documentation https://developers.google.com/accounts/docs/OAuth2 /// </summary> /// <param name="clientSecretJson">Path to the client secret json file from Google Developers console.</param> /// <param name="userName">Identifying string for the user who is being authentcated.</param> /// <returns>DriveService used to make requests against the Drive API</returns> public static DriveService AuthenticateOauth(string clientSecretJson, string userName) { try { if (string.IsNullOrEmpty(userName)) throw new Exception("userName is required."); if (!File.Exists(clientSecretJson)) throw new Exception("clientSecretJson file does not exist."); // These are the scopes of permissions you need. It is best to request only what you need and not all of them string[] scopes = new string[] {DriveService.Scope.DriveReadonly}; // Modify your Google Apps Script scripts' behavior UserCredential credential; using (var stream = new FileStream(clientSecretJson, FileMode.Open, FileAccess.Read)) { string credPath = System.Environment.GetFolderPath(System.Environment.SpecialFolder.Personal); credPath = Path.Combine(credPath, ".credentials/apiName"); // Requesting Authentication or loading previously stored authentication for userName credential = GoogleWebAuthorizationBroker.AuthorizeAsync(GoogleClientSecrets.Load(stream).Secrets, scopes, userName, CancellationToken.None, new FileDataStore(credPath, true)).Result; } // Create Drive API service. return new DriveService(new BaseClientService.Initializer() { HttpClientInitializer = credential, ApplicationName = "Drive Authentication Sample", }); } catch (Exception ex) { Console.WriteLine("Create Oauth2 DriveService failed" + ex.Message); throw new Exception("CreateOauth2DriveFailed", ex); } }
Preparation to run the demo code:

1. The .NET Core tooling [Required: Visual Studio 2015 Update 3]
2. Register an app in Azure AD and create a service principal for accessing the resource. For more detail please refer to the document.
3. Prepare the authentication file with content in the following format. The values can be obtained from step 2.

subscription=########-####-####-####-############
client=########-####-####-####-############
key=XXXXXXXXXXXXXXXX
tenant=########-####-####-####-############
managementURI=https\://management.core.windows.net/
baseURL=https\://management.azure.com/
authURL=https\://login.windows.net/
graphURL=https\://graph.windows.net/

4. Change the Azure authentication file path: AzureCredentials credentials = AzureCredentials.FromFile("Full path of your AzureAuthFile");

I created a demo using a common console application (preparations 2 and 3 are also needed) and have tested it. The detailed steps are as follows:

1. Install the Microsoft Azure Management Client Library
2. Add the demo code as follows:

AzureCredentials credentials = AzureCredentials.FromFile(@"full file path");
var azure = Azure
    .Configure()
    .WithLogLevel(HttpLoggingDelegatingHandler.Level.BASIC)
    .Authenticate(credentials)
    .WithDefaultSubscription();
foreach (var virtualMachine in azure.VirtualMachines.ListByGroup("resource Group name").Where(virtualMachine => virtualMachine.ComputerName.Equals("vm name")))
{
    //virtualMachine.Start();
    virtualMachine.PowerOff();
    Console.ReadKey();
}

3. Debug the demo project
<dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger2</artifactId> <version>${swagger.version}</version> </dependency> <!-- https://mvnrepository.com/artifact/io.springfox/springfox-swagger-ui --> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger-ui</artifactId> <version>${swagger.version}</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-spi</artifactId> <version>${swagger.version}</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-core</artifactId> <version>${swagger.version}</version> </dependency> <!-- https://mvnrepository.com/artifact/io.springfox/springfox-spring-web --> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-spring-web</artifactId> <version>${swagger.version}</version> </dependency> Convert the above dependency to gradle. Version i used is 2.3.1 package XXXX; import springfox.documentation.swagger2.annotations.EnableSwagger2; @EnableSwagger2 public class SwaggerConfiguration { } We enabled the Swagger Config as stated above. You can add Docket bean for custom headers: @Bean public Docket docket() { Parameter parameterAuthorization = new ParameterBuilder().name("Authorization").description("Authentication of the API User") .modelRef(new ModelRef("string")).parameterType("header").required(true).build(); Parameter parameterClientUserId = new ParameterBuilder().name("user_id").description("Client user identifier") .modelRef(new ModelRef("string")).parameterType("header").required(true).build(); return new Docket(DocumentationType.SWAGGER_2).select().apis(RequestHandlerSelectors.any()) .paths(PathSelectors.any()).build() .globalOperationParameters(Lists.newArrayList(parameterClientUserId, parameterAuthorization)); } And finally import the Swagger config in Main Application Class on Spring boot @Import({SwaggerConfiguration.class})
It really depends what you mean by "will it run". The MongoDB v1.1.0.4184 C# driver was released in June, 2011 and dates to roughly the MongoDB 1.8 server release timeframe. This driver version is certainly no longer tested or supported, and will not be fully compatible with newer server features like the WiredTiger storage engine (default in MongoDB 3.2+) or SCRAM-SHA-1 authentication (default in MongoDB 3.0+). The MongoDB documentation includes a reference table with recommended version(s) of the drivers for use with a specific version of MongoDB: C#/.NET Driver Compatibility. If this is a production system I would strongly recommend taking the time to update and test a supported version of the C# driver for use with MongoDB 3.2 (eg. the v1.11 C# driver). I suspect it is very likely you will encounter fixed (or novel) bugs/behaviour using a driver that is more than five years old. Your application won't be able to take advantage of many of the newer server features, and this obsolete driver predates specifications such as standard Server Discovery and Monitoring (SDAM) behaviour. That said, assuming you aren't using any features the driver isn't aware of your code may continue to run (or at least appear to run) successfully. In my opinion doing so is a high risk deployment strategy.
I suggest you use an approach based on inheritance (single table inheritance). You can add a type column to the users table and Rails will manage it for you.

example_timestamp_add_type_to_users.rb

class AddTypeToUsers < ActiveRecord::Migration[5.0]
  def change
    add_column :users, :type, :string, null: false
  end
end

Using this you can add as many user types as you want:

models/job_seeker.rb

class JobSeeker < User
  has_one :job_seeker_profile, dependent: :destroy
  after_create :profile

  private

  def profile
    create_job_seeker_profile
  end
end

models/company_owner.rb

class CompanyOwner < User
  has_one :company_owner_profile, dependent: :destroy
  after_create :profile

  private

  def profile
    create_company_owner_profile
  end
end

models/admin.rb

class Admin < User
end

This approach allows you to use the authentication methods based on user type:

for job seekers: current_job_seeker, authenticate_job_seeker!
for company owners: current_company_owner, authenticate_company_owner!
for admins: current_admin, authenticate_admin!

You can also build different registration and/or sign-in forms for each user type:

config/routes.rb

Rails.application.routes.draw do
  devise_for :admins
  devise_for :job_seekers
  devise_for :company_owners
end
Cryptography in Java is a very complex problem. Out of the box, the JVM comes installed with certain "providers". The Provider in Java defines an interface for cryptography operations, and there is a list of them available for use in the JVM. Each provider implementation has different supported algorithms and key sizes. When you call Cipher.getInstance, the JVM looks at all of the installed Providers and chooses one that supports the algorithm you requested. In your case, the exception is telling you that no provider registered with the JVM supports the type of encryption you are doing. This could be due to a number of reasons:

- RSA isn't supported by the provider it selected
- The key size / type isn't supported

When I have wanted to do cryptography in Java I have used BouncyCastle as a Provider. You can specify the Bouncy Castle provider using the other Cipher.getInstance overload, or use BouncyCastle's helper APIs so that you don't have to use the Cipher class directly. Check out an RSA encrypt/decrypt example here.

As a side note, if you plan on using AES 256 or higher and are using the Oracle JDK, you must install the Unlimited Strength JCE components. http://docs.oracle.com/javase/7/docs/technotes/guides/security/SunProviders.html
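To make the provider mechanics concrete, here is a minimal sketch in Java of registering Bouncy Castle and asking for an RSA cipher explicitly from that provider. It assumes the Bouncy Castle provider jar (bcprov) is on the classpath, and the key size and transformation string are just illustrative choices:

import org.bouncycastle.jce.provider.BouncyCastleProvider;
import javax.crypto.Cipher;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Security;
import java.util.Arrays;

public class RsaProviderSketch {
    public static void main(String[] args) throws Exception {
        // Register Bouncy Castle so "BC" can be named explicitly in getInstance calls
        Security.addProvider(new BouncyCastleProvider());

        // Generate a throwaway RSA key pair for the demonstration
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA", "BC");
        kpg.initialize(2048);
        KeyPair keyPair = kpg.generateKeyPair();

        // Ask for the cipher from the BC provider instead of letting the JVM pick one
        Cipher encrypt = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding", "BC");
        encrypt.init(Cipher.ENCRYPT_MODE, keyPair.getPublic());
        byte[] cipherText = encrypt.doFinal("hello".getBytes(StandardCharsets.UTF_8));

        Cipher decrypt = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding", "BC");
        decrypt.init(Cipher.DECRYPT_MODE, keyPair.getPrivate());
        byte[] plainText = decrypt.doFinal(cipherText);

        System.out.println(Arrays.equals(plainText, "hello".getBytes(StandardCharsets.UTF_8)));
    }
}

If a requested transformation or key size still throws, it means none of the registered providers (including BC) supports that exact combination, which is the same failure mode described above.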
Configure cookie name: services.AddSession(options => { options.CookieName = ".MyProjectName.Session"; options.IdleTimeout = TimeSpan.FromMinutes(120); }); And try with following attribute: [AttributeUsage(AttributeTargets.Method, Inherited = true, AllowMultiple = false)] public class CheckSessionOutAttribute : ActionFilterAttribute { public override void OnActionExecuting(ActionExecutingContext filterContext) { HttpContext context = HttpContext.Current; if (context.Session != null) { if (context.Session.IsNewSession) { string sessionCookie = context.Request.Headers["Cookie"]; if ((sessionCookie != null) && (sessionCookie.IndexOf("MyProjectName.Session") >= 0)) { FormsAuthentication.SignOut(); string redirectTo = "~/Account/Login"; //YOUR LOGIN PAGE HERE if (!string.IsNullOrEmpty(context.Request.RawUrl)) { redirectTo = string.Format("~/Account/Login?ReturnUrl={0}", HttpUtility.UrlEncode(context.Request.RawUrl)); filterContext.Result = new RedirectResult(redirectTo); return; } } } } base.OnActionExecuting(filterContext); } } Usage: [CheckSessionOut] public ViewResult Index() { }
Try changing like 950 segment to : def getCurrentCifUser() { def cifUser = CifUser.find("from com.vastpalaso.app.cif.CifUser cu where cu.userDetails.user=?", [ springSecurityService.currentUser ]) return cifUser } Now going back to your original question How about you change your HTTPURLConnection to be more like this on the android app: public class BasicAuthenticationExample { public static final String URL_SECURE = "[secure url]"; public static final String URL_LOGOUT = "[logout url]"; private HttpClient client = null; /** * Constructor for BasicAuthenticatonExample. */ public BasicAuthenticationExample(String host, int port) { client = new HttpClient(); List<String> authPrefs = new ArrayList<String>(2); authPrefs.add(AuthPolicy.DIGEST); authPrefs.add(AuthPolicy.BASIC); client.getParams().setParameter(AuthPolicy.AUTH_SCHEME_PRIORITY, authPrefs); client.getParams().setAuthenticationPreemptive(true); client.getState().setCredentials(new AuthScope(host, port, AuthScope.ANY_REALM), new UsernamePasswordCredentials(ConnectionConstants.USERNAME, ConnectionConstants.PASSWORD)); } public static void main(String[] args) { BasicAuthenticationExample example = new BasicAuthenticationExample("localhost", 8080); // create a GET method that reads a file over HTTPS, we're assuming // that this file requires basic authentication using the realm above. GetMethod get = new GetMethod(URL_SECURE); // Tell the GET method to automatically handle authentication. The // method will use any appropriate credentials to handle basic // authentication requests. Setting this value to false will cause // any request for authentication to return with a status of 401. // It will then be up to the client to handle the authentication. get.setDoAuthentication(true); try { // execute the GET int status = example.client.executeMethod(get); // print the status and response System.out.println(status + example.client.getState().toString() + "\n" + get.getStatusLine() + get.getResponseBodyAsString()); example.logout(); } catch (HttpException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } finally { // release any connection resources used by the method get.releaseConnection(); } } private void logout() throws HttpException, IOException { // create a GET method that reads a file over HTTPS, we're assuming // that this file requires basic authentication using the realm above. System.out.println("Logging out..."); System.out.println("--------------"); GetMethod get = new GetMethod(URL_LOGOUT); try { // execute the GET int status = client.executeMethod(get); // print the status and response System.out.println(status + client.getState().toString() + "\n" + get.getStatusLine() + get.getResponseBodyAsString()); } finally { // release any connection resources used by the method get.releaseConnection(); } } } Also a point of reference here
How can I achieve the same thing in ASP.NET Core?

First, you need an authentication middleware; in your case it may be basic authentication. For ASP.NET Core there is no built-in basic authentication middleware. A solution is here, or you can implement your own authentication middleware like this.

I stored my query data (Account - in this situation) in the actionContext, and I can access it later in controllers. Two possible ways come to my mind:

- Adding a parameter into HttpContext.Items
- Adding a claim to the current User.Identity

To implement this you can use ClaimsTransformation or a custom middleware after the authentication middleware. If you go with your own implementation, you can also use the HandleAuthenticateAsync method.

Update

It seems the right place to save query data is HandleAuthenticateAsync. If you use @blowdart's basic authentication solution, your code might be something like below:

.....
await Options.Events.ValidateCredentials(validateCredentialsContext);

if (validateCredentialsContext.Ticket != null)
{
    HttpContext.Items[HeaderFields.Account] = person; // assuming you retrieve person before this
    Logger.LogInformation($"Credentials validated for {username}");
    return AuthenticateResult.Success(validateCredentialsContext.Ticket);
}
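Since the claims route is only described in prose above, here is a rough, hedged sketch of what it could look like; the claim type "urn:myapp:account" and the AccountId property are made up for illustration, and the handler shape follows the snippet above:

// In your authentication handler, after the credentials are validated,
// attach the extra query data as a claim on the authenticated identity.
var identity = (ClaimsIdentity)validateCredentialsContext.Ticket.Principal.Identity;
identity.AddClaim(new Claim("urn:myapp:account", person.AccountId.ToString()));

// In any controller, the value then travels with the authenticated user.
public IActionResult Get()
{
    var accountId = User.FindFirst("urn:myapp:account")?.Value;
    return Ok(accountId);
}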
First find out your PHP version. In my case 5.6.

php --version
PHP 5.6.27 (cli) (built: Oct 15 2016 21:31:59)
Copyright (c) 1997-2016 The PHP Group
Zend Engine v2.6.0, Copyright (c) 1998-2016 Zend Technologies

Then:

sudo yum search mcrypt

And choose the best one for your version from the list; I used php56w-mcrypt.

$ sudo yum search mcrypt
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
..... output truncated ....
libmcrypt-devel.i686 : Development libraries and headers for libmcrypt
libmcrypt-devel.x86_64 : Development libraries and headers for libmcrypt
libtomcrypt-devel.i686 : Development files for libtomcrypt
libtomcrypt-devel.x86_64 : Development files for libtomcrypt
libtomcrypt-doc.noarch : Documentation files for libtomcrypt
php-mcrypt.x86_64 : Standard PHP module provides mcrypt library support
php55w-mcrypt.x86_64 : Standard PHP module provides mcrypt library support
# either of these are fine:
php56-php-mcrypt.x86_64 : Standard PHP module provides mcrypt library support
php56w-mcrypt.x86_64 : Standard PHP module provides mcrypt library support
php70-php-mcrypt.x86_64 : Standard PHP module provides mcrypt library support
php70w-mcrypt.x86_64 : Standard PHP module provides mcrypt library support
php71-php-mcrypt.x86_64 : Standard PHP module provides mcrypt library support
libmcrypt.i686 : Encryption algorithms library
libmcrypt.x86_64 : Encryption algorithms library
libtomcrypt.i686 : A comprehensive, portable cryptographic toolkit
libtomcrypt.x86_64 : A comprehensive, portable cryptographic toolkit
mcrypt.x86_64 : Replacement for crypt()

Then install the one you picked (in my case):

sudo yum install php56w-mcrypt

Finally:

sudo service httpd restart
Based on the P4 Node module documentation it does not seem to offer any sort of authentication support and assumes that you've already authenticated before trying to use it. Using the command line you would do something like: echo PASSWORD|p4 login and in the APIs the equivalent to that command line redirect would be to use the "prompt" callback to provide the password. Further, once you've authenticated as a super user, you can do: p4 login USERNAME to gain a login ticket for that user without being prompted for their password. Since you're writing a tool that wants to run commands as other users without prompting them for a password, you'd probably want to make use of this functionality. Unfortunately, I can't offer any specific suggestions on how to architect your tool without knowing a lot more about it -- where is it running? Who's running it? What does it need to do? What's the security configuration of your server? How much do you trust people on your internal network? Etc.
I haven't worked with SockJS, but I have implemented a small mechanism for pure WebSocket. Maybe it will be helpful.

First of all, the OAuth configuration:

@Configuration
@EnableAuthorizationServer
@EnableResourceServer
public class OAuth2Configuration extends AuthorizationServerConfigurerAdapter {

    private static Logger logger = LoggerFactory.getLogger(OAuth2Configuration.class);

    private UsersService service = new UsersService();

    @Autowired
    AuthenticationManagerBuilder authenticationManager;

    @Autowired
    UserDetailsService userDetailsService;

    @Override
    public void configure(AuthorizationServerEndpointsConfigurer endpoints) throws Exception {
        authenticationManager.userDetailsService(this.userDetailsService);
        endpoints.authenticationManager((Authentication authentication) ->
            authenticationManager.getOrBuild().authenticate(authentication));
    }

    @Override
    public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
        clients.inMemory().withClient("application_name")
            .authorizedGrantTypes("password", "authorization_code", "refresh_token")
            .scopes("write", "read", "trust")
            .secret("secret").accessTokenValiditySeconds(24 * 60 * 60);
    }

    @Bean
    public UserDetailsService userDetailsService() {
        return (username) -> {
            return service.getByName(username).map(account ->
                new User(account.getName(), account.getPassword(), account.getAuthorities())).orElseThrow(
                    () -> new RuntimeException("User not found")
            );
        };
    }
}

Second, Angular 2 authorization:

let headers = new Headers();
headers.append("Authorization", "Basic " + btoa("application_name:secret"));
this.http.post("localhost:8080/oauth_endpoint?grant_type=password&scope=trust&username=" + login + "&password=" + password,
        "", { headers: headers })
    .map((response: Response) => response.json())
    .subscribe(response => {
        this.accessToken = response.access_token; //will be used for socket
    });

Third, the socket configuration:

@Configuration
@EnableWebSocket
public class SocketConfig implements WebSocketConfigurer {

    public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
        registry.addHandler(socketHandler(), "/websocket").setAllowedOrigins("*");
    }

    @Bean
    public SocketHandler socketHandler() {
        return new SocketHandler();
    }
}

public class SocketHandler extends TextWebSocketHandler {

    @Autowired
    private CheckTokenEndpoint checkTokenEndpoint;

    private static Logger logger = LoggerFactory.getLogger(SocketHandler.class);

    public void afterConnectionEstablished(WebSocketSession session) {
        logger.info("New peer connected: " + session.getId());
    }

    public void handleMessage(WebSocketSession session, WebSocketMessage<?> message) throws Exception {
        logger.debug("Peer is trying to authenticate");
        String token = message.getPayload().toString();
        try {
            checkTokenEndpoint.checkToken(token);
            logger.info("New peer authenticated. ");
        } catch (Exception e) {
            logger.warn("Peer unauthenticated!");
            session.close(); //closing connection when provided token doesn't match
        }
    }
}

And last, establishing the connection via Angular 2 (the handler above expects the access token as the first message):

let ws = new WebSocket("ws://localhost:8080/websocket", []);
ws.onopen = (event: Event) => { ws.send(this.accessToken); };

This code may not work if you just copy/paste. I had several other cases to deal with (i.e. my websocket reestablishing the connection). Because they are out of the question's scope, I removed them manually while placing the code here.
HttpClientCertificate, as the name suggests, contains the certificate (typically following X.509). So it does not contain "arbitrary" data encrypted with the PIN/private key on the smartcard (I assume you are actually trying to refer to session authentication data). The certificate consists of only static data, which is the public key that the certificate was issued for, some metadata (such as identifying information for the key pair, parameters of the public-key cryptosystem, validity periods, usage constraints, etc.), and a signature over that static data, which is either issued by a cerificate authority (in case a certificate authority issued the certificate) or created with the private key that corresponds to the public key in the certificate (in case of a self-signed certificate / CA root certificate, though this should not be the case for TLS client certificates). So HttpClientCertificate neither contains the private key associated with the certificate nor any dynamic data signed by the private key in order to authenticate the TLS session.
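For illustration, a hedged C# sketch of what is actually readable from HttpClientCertificate in an ASP.NET request; every value below is static certificate data, and none of it is produced by the smartcard's private key during the session:

// All of these come from the certificate itself, not from the card at runtime.
HttpClientCertificate cert = Request.ClientCertificate;
if (cert.IsPresent && cert.IsValid)
{
    string subject = cert.Subject;        // identifying information of the holder
    string issuer = cert.Issuer;          // who signed the certificate
    DateTime notAfter = cert.ValidUntil;  // part of the validity period
    byte[] publicKey = cert.PublicKey;    // the public key only - never the private key
    byte[] rawCert = cert.Certificate;    // the full encoded certificate
}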
Someone else had the same issue you are getting. Try the following; the procedure below may solve it.

A.) Log in to Gmail, visit the link https://www.google.com/settings/security/lesssecureapps and turn on less secure apps.

B.) Edit the .env file as below:

MAIL_DRIVER=smtp
MAIL_HOST=smtp.gmail.com
MAIL_PORT=587
MAIL_USERNAME=username //i.e. [email protected]
MAIL_PASSWORD=password //Gmail account's password
MAIL_ENCRYPTION=ssl

C.) In your controller, write the following:

$rawData = Request::all();
Mail::queue('program.meeting.emailInvite', $rawData, function($message) use ($rawData) {
    $message->from('[email protected]', 'Echosofts')->to(array_map('trim', explode(',', $rawData['all_email_id'])))->subject($rawData['mail_title']);
});

Then email was working fine, except that the sender email ID was my Google account ([email protected]) instead of [email protected].

D.) To overcome the sender email problem, I visited my Google account and did the following: "Settings icon" -> Settings -> Accounts and Import -> Send mail as -> Add another email address you own.
I did report a similar issue when using the mobile hub helper here: https://github.com/aws/aws-mobilehub-helper-ios/issues/14. But that was just data stored in NSUserDefaults by the AWSSignInProviders for Google and Facebook; I did not experience being able to sync data from a different identityId. Are you sure that is what you are seeing?

One comment: if you use different authentication providers, you may accidentally merge the two identities (log in with Google, then with Facebook). So make sure that you are looking at different identityIds, because if they are merged then their sync datasets get merged too.

Lastly... getIdentityId is not enough to switch identities. You need to follow that with a call to get the credentials; if not, the credentials provider never goes and calls "logins" with your identityProviderManager and never gets the new authenticated state. So login is get-id followed by get-credentials for that id. Logout would be the same if you wanted to be an unauthenticated user (log out so your logins dictionary is empty, then getId, getCredentials... and you will have unauthenticated credentials).
In general, background vibration (from the screen-off state) is not directly available for web apps unless you are using the Alarm API or Notifications. But a timed background vibration can easily be tricked out using the Power API and web workers. I am sharing sample code:

main.js

window.onload = function() {
    document.addEventListener('tizenhwkey', function(e) {
        if (e.keyName === "back") {
            try {
                tizen.application.getCurrentApplication().hide();
            } catch (ignore) {}
        }
    });

    var mainPage = document.querySelector('#main');
    mainPage.addEventListener("click", function() {
        var contentText = document.querySelector('#content-text');
        var worker; //web worker
        worker = new Worker("js/worker.js"); //load from directory
        worker.onmessage = function(event) { //receive data from worker
            tizen.power.turnScreenOn(); // forcefully turn the screen on
            setTimeout(function (){
                contentText.innerHTML = event.data; // time counter
                navigator.vibrate(1000);
            }, 500); // just being safe (vibrate after screen is on)
        };
    });
};

worker.js

var i=0;
function timedCount() {
    i=i+1;
    postMessage(i); //send data
    setTimeout("timedCount()",5000); // set vibration interval (or use specific time)
}
timedCount();

Add these lines to your config.xml:

<tizen:privilege name="http://tizen.org/privilege/power"/>
<tizen:setting background-support="enable" encryption="disable" hwkey-event="enable"/>

Once background-support is enabled, the app will respond while minimized when you use web workers. Using getCurrentApplication().hide() instead of getCurrentApplication().exit() on the back key event will do the task for you. Check the Vibration Guide for different types of vibration.
I presume that the encryption being used is the old, very weak, encryption that was part of the original PKZIP format. That encryption method has a 12-byte salt header before the compressed data. From the PKWare specification: After the header is decrypted, the last 1 or 2 bytes in Buffer should be the high-order word/byte of the CRC for the file being decrypted, stored in Intel low-byte/high-byte order. Versions of PKZIP prior to 2.0 used a 2 byte CRC check; a 1 byte CRC check is used on versions after 2.0. This can be used to test if the password supplied is correct or not. It was originally two bytes in the 1.0 specification, but in the 2.0 specification, and in the associated version of PKZIP, the check value was changed to one byte in order to make password searches like what you are doing more difficult. The result is that about one out of every 256 random passwords will result in passing that first check, and then proceeding to try to decompress the incorrectly decrypted compressed data, only then running into an error. So it's far, far more than two passwords that will be "accepted". However it won't take very many bytes of decompressed data to detect that the password was nevertheless incorrect.
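To make that concrete, a small hedged sketch (C#, not taken from any particular zip library) of the only pre-decompression check the 2.0 scheme allows; decryptedHeader is the 12-byte header after decryption with a candidate password, and storedCrc32 is the CRC recorded in the entry's metadata:

// With a single check byte, roughly 1 in 256 wrong passwords still passes this
// test and is only rejected later, when decompression of the wrongly decrypted
// data fails.
static bool PassesQuickPasswordCheck(byte[] decryptedHeader, uint storedCrc32)
{
    byte expected = (byte)(storedCrc32 >> 24); // high-order byte of the CRC
    return decryptedHeader[11] == expected;
}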
This is a reply from the Reddit user 1RedOne that helped me out:

For one, let's wrap your whole -Uri in single quotes and remove the double quotes. Your URL is probably messed up, which isn't helping things.

$uri = 'https://myhost/MAM/wfservice/workers/?ip=&port=&newStatus=Deactivating'
$response = Invoke-RestMethod -Uri $uri -Method POST -Body $json -Credential $cred -ContentType 'application/json'

2. Furthermore, your call from Fiddler uses basic authentication, and is probably incompatible with using a -Credential object. Try replacing your credentials with this format.

$user = "yourusername"
$pass = 'yourPassWord'

# Build auth header
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $user, $pass)))

# Set proper headers
$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add('Authorization',('Basic {0}' -f $base64AuthInfo))

Then, reference the $headers object within your Invoke-RestMethod, like so.

$response = Invoke-RestMethod -Uri $uri -Method POST `
    -Body $json -Header $headers -ContentType 'application/json'

That's it. It worked like a charm!
I managed to find a solution on how to enable AAD authorization for an Azure REST API App. Just in case anyone has the same challenge, I hope this will be helpful. These are the steps I did:

1) In App Services -> Authentication/Authorization
- App Service Authentication => On
- Action to take when request is not authenticated => Login with AAD
- Configured AAD with Express settings (there you have to create an Azure AD App for your API App - i.e. an "App registration" for your service)

2) In Azure Active Directory -> App registrations
- Add a registration for your client app
- Edit the Manifest of your client app - in the requiredResourceAccess section you must add information about the REST API App:
  - resourceAppId -> insert REST API App id here
  - resourceAccess {id} -> OauthPermission id value of the REST API (you can get it in the REST API's manifest!)

3) In your client application
- generate your REST client using Autorest (from Solution Explorer: Add\REST API client) or create it manually
- add the Microsoft.IdentityModel.Clients.ActiveDirectory nuget package
- get and use a token to access your API with code similar to this:

//request
(..)
var tokenCreds = getToken();
ServiceClientCredentials credentials = tokenCreds;
using (var client = new YourAPI(credentials))
{
    ...
}
(..)

//getting token
private static TokenCredentials getToken()
{
    //get this from Federation Metadata Document in
    //Azure Active Directory App registrations -> Endpoints
    var authority = "f1...";

    //Identifier of the target resource that is the recipient of the requested token
    var resource = "https://yourapi.azurewebsites.net";

    //client application id (see Azure Active Directory App registration
    //for your client app)
    var clientId = "a71...";

    //return url - not relevant for Native apps (just has to be valid url)
    var redirectUri = "https://just-some-valid-url.net";

    AuthenticationContext authContext = new AuthenticationContext(string.Format("https://login.windows.net/{0}", authority));

    AuthenticationResult tokenAuthResult = authContext.AcquireTokenAsync(resource, clientId,
        new Uri(redirectUri), new PlatformParameters(PromptBehavior.Auto)).Result;

    return new TokenCredentials(tokenAuthResult.AccessToken);
}
I decided to put this in an answer as it might not fit into a single comment. First of all, lets go back to basics - HTTP status codes. There are two main codes that you are interested in when talking about authentication and authorization - 401 and 403. From RFC 7235 spec: 3.1. 401 Unauthorized (https://www.rfc-editor.org/rfc/rfc7235#section-3.1) The 401 (Unauthorized) status code indicates that the request has not been applied because it lacks valid authentication credentials for the target resource. 6.5.3. 403 Forbidden (https://www.rfc-editor.org/rfc/rfc7231#section-6.5.3) The 403 (Forbidden) status code indicates that the server understood the request but refuses to authorize it. A server that wishes to make public why the request has been forbidden can describe that reason in the response payload (if any). In other words, 401 means that there is a problem with authentication (either user is not authenticated or is authenticated incorrectly). One can provide valid credentials and try again. At the same time, 403 means there are problems with permissions. Server knows who the user is but denies access - one should not try again with the same credentials. OWIN CookieAuthentication just sits there and listens for 401 error code being returned. If it detects such code, response is replaced with redirect to login page maintaining return address. Despite the name of AuthorizeAttribute, it actually generates 401 status code. https://github.com/ASP-NET-MVC/aspnetwebstack/blob/master/src/System.Web.Http/AuthorizeAttribute.cs#L155 Therefore, user is taken to login page. If you want to change that, you might need to implement your own AuthorizeAttribute. Then you could check if user has already logged in and return 403 status. If the user hasn't logged in, just return 401.
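A minimal, hedged sketch of such an attribute for Web API (the class name is mine, not from the linked source): anonymous callers still get 401, so the cookie middleware redirects them to login, while authenticated-but-unauthorized users get 403 and no redirect.

// Hypothetical example building on System.Web.Http.AuthorizeAttribute.
public class AuthorizeWithForbiddenAttribute : System.Web.Http.AuthorizeAttribute
{
    protected override void HandleUnauthorizedRequest(System.Web.Http.Controllers.HttpActionContext actionContext)
    {
        var identity = actionContext.RequestContext.Principal?.Identity;
        if (identity != null && identity.IsAuthenticated)
        {
            // Known user without permission: 403, so no login redirect is triggered.
            actionContext.Response = new System.Net.Http.HttpResponseMessage(System.Net.HttpStatusCode.Forbidden);
        }
        else
        {
            // Anonymous caller: keep the default 401 behaviour.
            base.HandleUnauthorizedRequest(actionContext);
        }
    }
}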
Firstly, you'll want to add a system property to WildFly's standalone.xml to specify the location of the Kerberos configuration file: ... </extensions> <system-properties> <property name="java.security.krb5.conf" value="/path/to/krb5.conf"/> </system-properties> ... I'm not going to go into the format of the krb5.conf file here, as it is dependent on your own implementation of Kerberos. What is important is that it contains the default realm and network location of the KDC. On Linux you can normally find it at /etc/krb5.conf or /etc/security/krb5.conf. If you're running WildFly on Windows, then make sure you use forward-slashes in your path, e.g. "C:/Source/krb5.conf" Secondly, add two new security domains to standalone.xml - one called "Client" which is used by ZooKeeper, and another called "host", which is used by WildFly. Do not ask me why (it caused me so much pain) but the name of the "Client" security domain must match that defined in Zookeeper's JAAS client configuration file on the server. If you've set up with Ambari, "Client" is the default name. Also note that you cannot simply provide a jaas.config file as a system property, you must define it here: <security-domain name="Client" cache-type="default"> <login-module code="com.sun.security.auth.module.Krb5LoginModule" flag="required"> <module-option name="useTicketCache" value="true"/> <module-option name="debug" value="true"/> </login-module> </security-domain> <security-domain name="host" cache-type="default"> <login-module code="org.jboss.security.negotiation.KerberosLoginModule" flag="required" module="org.jboss.security.negotiation"> <module-option name="useTicketCache" value="true"/> <module-option name="debug" value="true"/> <module-option name="refreshKrb5Config" value="true"/> <module-option name="addGSSCredential" value="true"/> </login-module> </security-domain> The module options will vary depending on your implementation. I'm getting my tickets from the default Java ticket cache, which is defined in the java.security file of your JRE, but you can supply a keytab here if you want. Note that setting storeKey to true broke my implementation. Check the Java documentation for all of the options. Note that each security domain uses a different login module: this is not by accident - Phoenix does not know how to use the org.jboss... version. Now you need to provide WildFly with the org.apache.phoenix.jdbc.PhoenixDriver class in phoenix-<version>-client.jar. Create the following directory tree under the WildFly directory: /modules/system/layers/base/org/apache/phoenix/main/ In the main directory, paste the phoenix--client.jar which you can find on the server (e.g. 
/usr/hdp/<version>/phoenix/client/bin) and create a module.xml file: <?xml version="1.0" ?> <module xmlns="urn:jboss:module:1.1" name="org.apache.phoenix"> <resources> <resource-root path="phoenix-<version>-client.jar"> <filter> <exclude-set> <path name="javax" /> <path name="org/xml" /> <path name="org/w3c/dom" /> <path name="org/w3c/sax" /> <path name="javax/xml/parsers" /> <path name="com/sun/org/apache/xerces/internal/jaxp" /> <path name="org/apache/xerces/jaxp" /> <path name="com/sun/jersey/core/impl/provider/xml" /> </exclude-set> </filter> </resource-root> <resource-root path="."> </resource-root> </resources> <dependencies> <module name="javax.api"/> <module name="sun.jdk"/> <module name="org.apache.log4j"/> <module name="javax.transaction.api"/> <module name="org.apache.commons.logging"/> </dependencies> </module> You also need to paste the hbase-site.xml and core-site.xml from the server into the main directory. These are typically located in /usr/hdp/<version>/hbase/conf and /usr/hdp/<version>/hadoop/conf. If you don't add these, you will get a lot of unhelpful ZooKeeper getMaster errors! If you want the driver to log to the same place as WildFly, then you should also create a log4j.xml file in the main directory. You can find an example elsewhere on the web. The <resource-root path="."></resource-root> element is what adds those xml files to the classpath when deployed by WildFly. Finally, add a new datasource and driver in the <subsystem xmlns="urn:jboss:domain:datasources:2.0"> section. You can do this with the CLI or by directly editing standalone.xml, I did the latter: <datasource jndi-name="java:jboss/datasources/PhoenixDS" pool-name="PhoenixDS" enabled="true" use-java-context="true"> <connection-url>jdbc:phoenix:first.quorumserver.fqdn,second.quorumserver.fqdn:2181/hbase-secure</connection-url> <connection-property name="phoenix.connection.autoCommit">true</connection-property> <driver>phoenix</driver> <validation> <check-valid-connection-sql>SELECT 1 FROM SYSTEM.CATALOG LIMIT 1</check-valid-connection-sql> </validation> <security> <security-domain>host</security-domain> </security> </datasource> <drivers> <driver name="phoenix" module="org.apache.phoenix"> <xa-datasource-class>org.apache.phoenix.jdbc.PhoenixDriver</xa-datasource-class> </driver> </drivers> It's important that you replace first.quorumserver.fqdn,second.quorumserver.fqdn with the correct ZooKeeper quorum string for your environment. You can find this in hbase-site.xml in the HBase configuration directory: hbase.zookeeper.quorum. You don't need to add Kerberos information to the connection URL string! tl;dr Make sure that hbase-site.xml and core-site.xml are in your classpath. Make sure that you have a <security-domain> with a name that ZooKeeper expects (probably "Client"), that uses the com.sun.security.auth.module.Krb5LoginModule. The Phoenix connection URL must contain the entire ZooKeeper quorum. You can't miss one server out! Make sure it matches the value in hbase-site.xml. References: Using Kerberos for Datasource Authentication Phoenix data source configuration by Mark S
Check the status WebException.Status This will let you know what specific web exception has occured. Update: Try change the request.Method = "HEAD"; to GET and try. Try with a unavailable (404) url, compare the status. Check whether anything is blocking your request. This is how i manage in my code, i am handling using only ftp specific status.'CommStatus' is an ENUM with error codes which is available in whole application. catch (WebException ex) { FtpWebResponse response = (FtpWebResponse)ex.Response; switch(response.StatusCode) { case FtpStatusCode.ActionNotTakenFileUnavailable: return CommStatus.PathNotFound; case FtpStatusCode.NotLoggedIn: return CommStatus.AuthenticationError; default: return CommStatus.UnhandledException; } } Below are the available Status of WebException. CacheEntryNotFound This API supports the product infrastructure and is not intended to be used directly from your code. The specified cache entry was not found. ConnectFailure This API supports the product infrastructure and is not intended to be used directly from your code. The remote service point could not be contacted at the transport level. ConnectionClosed This API supports the product infrastructure and is not intended to be used directly from your code. The connection was prematurely closed. KeepAliveFailure This API supports the product infrastructure and is not intended to be used directly from your code. The connection for a request that specifies the Keep-alive header was closed unexpectedly. MessageLengthLimitExceeded This API supports the product infrastructure and is not intended to be used directly from your code. A message was received that exceeded the specified limit when sending a request or receiving a response from the server. NameResolutionFailure This API supports the product infrastructure and is not intended to be used directly from your code. The name resolver service could not resolve the host name. Pending This API supports the product infrastructure and is not intended to be used directly from your code. An internal asynchronous request is pending. PipelineFailure This API supports the product infrastructure and is not intended to be used directly from your code. The request was a piplined request and the connection was closed before the response was received. ProtocolError This API supports the product infrastructure and is not intended to be used directly from your code. The response received from the server was complete but indicated a protocol-level error. For example, an HTTP protocol error such as 401 Access Denied would use this status. ProxyNameResolutionFailure This API supports the product infrastructure and is not intended to be used directly from your code. The name resolver service could not resolve the proxy host name. ReceiveFailure This API supports the product infrastructure and is not intended to be used directly from your code. A complete response was not received from the remote server. RequestCanceled This API supports the product infrastructure and is not intended to be used directly from your code. The request was canceled, the WebRequest.Abort method was called, or an unclassifiable error occurred. This is the default value for Status. RequestProhibitedByCachePolicy This API supports the product infrastructure and is not intended to be used directly from your code. The request was not permitted by the cache policy. In general, this occurs when a request is not cacheable and the effective policy prohibits sending the request to the server. 
You might receive this status if a request method implies the presence of a request body, a request method requires direct interaction with the server, or a request contains a conditional header. RequestProhibitedByProxy This API supports the product infrastructure and is not intended to be used directly from your code. This request was not permitted by the proxy. SecureChannelFailure This API supports the product infrastructure and is not intended to be used directly from your code. An error occurred while establishing a connection using SSL. SendFailure This API supports the product infrastructure and is not intended to be used directly from your code. A complete request could not be sent to the remote server. ServerProtocolViolation This API supports the product infrastructure and is not intended to be used directly from your code. The server response was not a valid HTTP response. Success This API supports the product infrastructure and is not intended to be used directly from your code. No error was encountered. Timeout This API supports the product infrastructure and is not intended to be used directly from your code. No response was received during the time-out period for a request. TrustFailure This API supports the product infrastructure and is not intended to be used directly from your code. A server certificate could not be validated. UnknownError This API supports the product infrastructure and is not intended to be used directly from your code. An exception of unknown type has occurred. More details here: https://msdn.microsoft.com/en-us/library/system.net.webexceptionstatus(v=vs.110).aspx
I suggest you use FirebaseUI for authentication and get a reference to the uid. I also suggest you don't use "+" to concatenate Firebase paths. Try this instead: private DatabaseReference databaseReference; ... DatabaseReference databaseReference = FirebaseDatabase.getInstance().getReference().child(FirebaseAuth .getInstance().getCurrentUser().getUid()).getRef(); and add a listener like this: listener = new ValueEventListener() { @Override public void onDataChange(DataSnapshot dataSnapshot) { for (DataSnapshot snapshot : dataSnapshot.getChildren()) { // handle UserInformation here } } @Override public void onCancelled(DatabaseError databaseError) { } }; databaseReference.addValueEventListener(listener); The reason I prefer separating it instead of overriding it inside the method is that then I can do the following in onDestroy: @Override protected void onDestroy() { super.onDestroy(); databaseReference.removeEventListener(listener); } Keeping DatabaseReference and listener as member variables makes this easier. Also, be sure to set appropriate Firebase rules for hardening a users access to only their data: { "rules": { "$uid": { ".write": "$uid === auth.uid", ".read": "$uid === auth.uid", } } } For a more complete example, check out a todo list app I made which uses the same database organization pattern here. Hope I was able to help, feel free to comment if you have any other questions or if something was unclear.
DECLARE @KeyName SYSNAME = 'keyName' IF NOT EXISTS (SELECT * FROM sys.openkeys WHERE key_name = @KeyName) BEGIN OPEN SYMMETRIC KEY keyName DECRYPTION BY CERTIFICATE certificateName; END DECLARE @WhatToEncrypt VARCHAR(400) = 'Something To Encrypt can be binary or character' DECLARE @EncryptedBinary VARBINARY(MAX) SET @EncryptedBinary = ENCRYPTBYKEY(KEY_GUID(@KeyName),@WhatToEncrypt) DECLARE @DecryptedBinary VARBINARY(MAX) SET @DecryptedBinary = DECRYPTBYKEY(@EncryptedBinary) SELECT @WhatToEncrypt as Original, CAST(@DecryptedBinary AS VARCHAR(400)) as EncryptedThenDecrypted --may want to add some logic to see if it was open and leave it open CLOSE SYMMETRIC KEY keyName The encryption and decryption needs to be done SQL side NOT ASP.Net side when you are using DB Encryption technique as you described. So to implement in C# you would have to basically pass the applicable SQL statements just like you would execute a stored procedure or something. I would recommend having stored procedures to open and close the keys and then simply use the functions ENCRYPTBYKEY and DECRYPTBYKEY as you need to compare values etc. Also note that both encryption functions can also have validation data passed like a salt. ENCRYPTBYKEY- https://msdn.microsoft.com/en-us/library/ms174361.aspx DECRYPTBYKEY - https://msdn.microsoft.com/en-us/library/ms181860.aspx
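If you need to drive this from the ASP.NET side, here is a hedged sketch of handing the plain value to SQL Server and letting ENCRYPTBYKEY do the work; connectionString and valueToEncrypt are placeholders, and keyName and certificateName are the same placeholders used in the T-SQL above.

// Nothing is encrypted in the application itself; SQL Server does it with the symmetric key.
using (var conn = new SqlConnection(connectionString))
using (var cmd = conn.CreateCommand())
{
    cmd.CommandText = @"
        OPEN SYMMETRIC KEY keyName DECRYPTION BY CERTIFICATE certificateName;
        SELECT ENCRYPTBYKEY(KEY_GUID('keyName'), @PlainText);
        CLOSE SYMMETRIC KEY keyName;";
    cmd.Parameters.Add("@PlainText", SqlDbType.VarChar, 400).Value = valueToEncrypt;
    conn.Open();
    var encryptedBytes = (byte[])cmd.ExecuteScalar();
    // Store encryptedBytes in a VARBINARY column, or compare it later after DECRYPTBYKEY.
}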
1) What issues does it cause for scaling the application, and what type of security issues does it face even though it uses RSA 2048 encryption for communication?

It makes the EP on the server side a single point of failure and does not allow load balancing. About the security issues, Andrew meant: this application may receive REST API calls, and this forces one to provide additional security for these REST API calls; it is better to use your first hybrid solution, relying solely on the event feature.

2) Can we embed more than one SDK in a standalone application and host it on the same server where kaa-node is present?

No, you can't use more than one SDK in one application, but you can run a couple of instances on one machine in different directories in order to prevent collisions of the autogenerated security keys and other files.

3) If the device sends the notification response along with the telemetry data, can it increase the latency or cause any other performance issue?

Of course, you will face some delays if you start sending very frequently and in big portions of data on both sides. If you have a lot of devices that send in total a big amount of telemetry data, you can increase performance on the server side by starting Kaa in cluster mode or adding new nodes for processing requests.

4) Which one is the better approach to achieve request/response functionality?

The second hybrid solution – data collection and notification features. This doesn't cause any problem with scale and you can easily launch the Kaa server in cluster mode.
Your AuthenticationManager solution looks right to me, or you could go with an AtomicReference<String> or general Holder<String> class you write. Your best alternative is to make something like a @Named("access-token") String or custom-qualifier-annotated @AccessToken String available through the app, with a non-Singleton @Provides method that uses module state to always return the most current value, but that has a number of problems too: There's no natural setter here, unlike on your AuthenticationManager. Unless the current value is available through something else in the dependency graph that you can accept in your @Provides method, you're going to have to inject your Module or something that can access the Module's mutable fields. That doesn't sound easy to understand. Strings aren't mutable, so if you want an object that returns the latest value, you'll always want an @AccessToken Provider<String> and never a @AccessToken String. Dagger doesn't make it easy to make keys that can only inject providers, so unless you have full control over this codebase or can set up a static analysis check, this will be fragile and easily-misused. You have somewhat-more-limited control over the thread-safety and synchronization of the Dagger solution, whereas your own settable holder has semantics you can define yourself. In unit tests, if you want the value of the Provider to change without creating a custom for-testing Dagger component, you'll have to make a settable Provider class. This looks so much like AtomicReference, Holder, or your AuthenticationManager, you might as well start with one of those. As a final alternative, if you can represent the state of a Request as a short-lived an immutable object, you might prefer to create one of those with a deliberately-limited lifetime. In this way, you would use short-lived objects instead of Singletons and wouldn't have to worry about updating existing instances later. This might also have attractive retry semantics, if (for instance) you want retries to happen with the old access token but for new requests to be created with a new access token. If this option appeals to you, also look up Dagger subcomponents: you could create a new subcomponent with a new immutable Module for every request, and then have full access to your object graph including access to temporary access tokens and state as far deep as it is needed.
Passing Parameters The way you are doing it will not work, because the template only accepts the path to a file (same with file()). If you are looking to pass parameters to a PowerShell file, there is a much simpler option here. You should rename the PowerShell file to OurTemplatedPowerShell.ps1.erb (not that it is required, but it helps identify it better). Then in the file itself, you should add the following right near the top: $UsernameERB = '<%= @username %>' $PasswordERB = '<%= @password %>' if ($UsernameERB -ne $null -and $UsernameERB -ne '') { $Username = $UsernameERB } if ($PasswordERB -ne $null -and $PasswordERB -ne '') { $Password = $PasswordERB } In this way you can support running the script outside of Puppet and with Puppet with fewer changes. Now change your manifest to simply this: class OurCompany::server($username, $password) { exec { 'Change Service Credentials': command => template('OurCompany/OurTemplatedPowerShell.ps1.erb'), provider => powershell, logoutput => true, returns => [0, 1] } The values are passed to the bindings for the ERB automatically. It makes passing values through much simpler. If you need to see an example of this, take a look at https://github.com/puppetlabs/puppetlabs-chocolatey/blob/2862e058de0c28be363cb7df03aa5da31caae414/templates/InstallChocolatey.ps1.erb#L24 https://github.com/puppetlabs/puppetlabs-chocolatey/blob/2862e058de0c28be363cb7df03aa5da31caae414/manifests/install.pp#L11-L18 It looks like the values are magic, but they are passed through to ruby templates (ERB) based on Puppet variables, so $username (Puppet manifest) == @username (in ERB file). Encryption Your best bet for now is going to be this: https://github.com/TomPoulton/hiera-eyaml There is something similar upcoming from Puppet for the Puppet Data Provider, but that is not going to be production ready until probably Puppet 5. Store your password inside the hieradata like: username: username password: password and use the hiera-eyaml to encrypt it. Look it up inside your manifest like: class OurCompany::server($username = hiera('username'), $password = hiera('password')) { You can also use automatic parameter lookup/automatic data bindings if you are only going to use the username and password inside this manifest: # hieradata --- OurCompany::server::username: username OurCompany::server::password: password Then you don't have to use hiera functions inside your manifest.
To set up security, you need to define a security role, a security constraint, and an authentication method, as well as the application binding. You only mentioned the application binding part; I am not sure if you have done the rest. You can refer to this documentation on how to set up the rest: http://www.ibm.com/support/knowledgecenter/SS7K4U_liberty/com.ibm.websphere.wlp.zseries.doc/ae/twlp_sec_quickstart.html

For your application, do you want to deploy it as an EAR or as a standalone WAR? In the dynamic web project structure, you seem to be using an EAR. However, in the application binding config element that you have, it has been converted to a standalone WAR application without the EAR. I would suggest you keep the original enterpriseApplication element and just add the application-bnd section under that element instead of defining a new application element. The reason is that changing the application type directly in the server config file will make the tools setup go out of sync with the server configuration.

If you want to deploy as a standalone WAR instead, without the EAR, remove the EAR from the server in the Servers view and add the WAR to the server first. Then, you can add the application-bnd section under the WAR definition to keep the tools and config settings in sync.
Ok, so my code was pretty wrong, as I wanted to encrypt with the private key but used the ImportPublicKey function. The correct way should be:

public static string Encrypt(string str, string key)
{
    try
    {
        key = key.Replace(Environment.NewLine, "");
        IBuffer keyBuffer = CryptographicBuffer.DecodeFromBase64String(key);
        AsymmetricKeyAlgorithmProvider provider = AsymmetricKeyAlgorithmProvider.OpenAlgorithm(AsymmetricAlgorithmNames.RsaPkcs1);
        var keyPar = provider.ImportKeyPair(keyBuffer, CryptographicPrivateKeyBlobType.Pkcs1RsaPrivateKey);
        //CryptographicKey publicKey = provider.ImportPublicKey(keyBuffer, CryptographicPublicKeyBlobType.Pkcs1RsaPublicKey);
        IBuffer dataBuffer = CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(str));
        var encryptedData = CryptographicEngine.Encrypt(keyPar, dataBuffer, null);
        var encryptedStr = CryptographicBuffer.EncodeToBase64String(encryptedData);
        var signature = CryptographicEngine.Sign(keyPar, dataBuffer);
        var signatureStr = CryptographicBuffer.EncodeToBase64String(signature);
        return encryptedStr;
    }
    catch (Exception e)
    {
        throw;
        return "Error in Encryption:With RSA ";
    }
}

and this works to encrypt the string using the RSA private key. However, when I try to decrypt using the public key with the following similar method:

public static string Decrypt(string str, string key)
{
    try
    {
        key = key.Replace(Environment.NewLine, "");
        IBuffer keyBuffer = CryptographicBuffer.DecodeFromBase64String(key);
        AsymmetricKeyAlgorithmProvider provider = AsymmetricKeyAlgorithmProvider.OpenAlgorithm(AsymmetricAlgorithmNames.RsaSignPkcs1Sha256);
        CryptographicKey publicKey = provider.ImportPublicKey(keyBuffer, CryptographicPublicKeyBlobType.X509SubjectPublicKeyInfo);
        IBuffer dataBuffer = CryptographicBuffer.CreateFromByteArray(Encoding.UTF8.GetBytes(str));
        var encryptedData = CryptographicEngine.Decrypt(publicKey, dataBuffer, null);
        return CryptographicBuffer.EncodeToBase64String(encryptedData);
    }
    catch (Exception e)
    {
        throw;
        return "Error in Decryption:With RSA ";
    }
}

I'm getting a Method or operation not implemented exception, so either there is still something wrong, or the private-encrypt/public-decrypt method is not yet there in UWP. What I ended up doing was getting the NuGet package Portable.BouncyCastle-Signed and following the code snippet from this answer: C# BouncyCastle - RSA Encryption with Public/Private keys. Works like a charm.
I've gone through Google and found that the problem is from an exploit of the plugin uploader in an ecommerce webapp. The attacker is essentially able to upload a PHP file using the site's mass uploader script, which accepts a zipped file and then installs it onto the server.

https://www.exploit-db.com/exploits/35052/

I would first look at the server that is hosting the Magento site and scan it for the problem directory, then take the appropriate steps to eradicate it. Then disable the software's ability to upload plugins through the web interface and make people upload the plugins they want manually through SFTP. The next step is to change all the passwords across everybody's accounts, just in case the attacker was able to glean that intel.

---------------------------

Exploit found date: 10/24/2014
Security Researcher name: Parvinder. Bhasin
Contact info: [email protected]
twitter: @parvinderb - scorpio

Currently tested version:
Magento version: Magento CE - 1.8 older
MAGMI version: v0.7.17a older

Download software link:
Magento server: http://www.magentocommerce.com/download
MAGMI Plugin: https://sourceforge.net/projects/magmi/files/magmi-0.7/plugins/packages/

MAGMI (MAGento Mass Importer) suffers from File inclusion vulnerability (RFI) which allows an attacker to upload essentially any PHP file (without any sanity checks). This PHP file could then be used to skim credit card data, rewrite files, run remote commands, delete files..etc. Essentially, this gives attacker ability to execute remote commands on the vulnerable server.

Steps to reproduce:
http:///magmi/web/magmi.php
Under upload new plugins: click on "choose file"
MAGento plugins are basically php file zipped. So create a php shell and zip the file. ex: evil.php ex: zip file: evil_plugin.zip.
After the file has been uploaded, it will say: Plugin packaged installed. evil.php: Your malicious evil.php file is extracted now.
All you then need to do is just access the evil.php page from: http:///magmi/plugins/evil.php
At this point you could really have access to the entire system. Download any malware, install rootkits, skim credit card data ..etc.etc.
You found the issue with your code, but I am answering your other question, which is: How do I add an exception only for that reset password? Which I am changing to: Should you authenticate_user! within ApplicationController?

Obviously, since I am switching up the question, the answer is no. Here is why: No application requires authentication at all times. If you need to log in, that means the application does not require authentication at all times. If you can "Forgot your password", then you do not require authentication at all times.

I've made this same mistake too... The authenticate_user! is not designed for ApplicationController, because ApplicationController is every controller. That said, it is designed for restricting access to controllers, yes, but not ApplicationController specifically. The reason you have to add this in the first place is, for example, to only let logged-in users edit their own articles. You wouldn't want just anyone to edit articles, but you do want anyone to see articles.

I don't know anything about your app, but typically authentication is a case-by-case thing, with very particular exceptions. So even if you have to paste authenticate_user! many times, that's okay.

There's also another method (which is why I was asking for your routes.rb):
https://github.com/plataformatec/devise/wiki/How-To:-Define-resource-actions-that-require-authentication-using-routes.rb

P.S. The answer to your first question:

before_action :authenticate_user!, unless: 'params[:controller] == "devise_passwords"'

or

before_action :authenticate_user!, unless: 'params[:controller] == "passwords"'

I forget the string representation for this controller. But again, this is bad and does not scale well as your application grows.