After some research, I believe this to be a bug. I have filed GRIZZLY-1877.
Update:
GRIZZLY-1877 has been resolved and version 2.3.30 is available for download and in maven central.
As a result, the workaround below is no longer necessary. Simply implementing SessionManager#getSessionCookieName() fixes the situation.
Old workaround:
In the meantime (or if v2.3.30 is not an option), I have a workaround, using Jersey's ContainerRequestFilter to set the session cookie name for each Request:
import java.io.IOException;

import javax.annotation.Priority;
import javax.inject.Inject;
import javax.ws.rs.Priorities;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.ext.Provider;

import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.Request;

/**
 * Until the session cookie can be defined in the Grizzly {@link HttpServer},
 * it will be set here.
 * <p>
 * The filter's priority ensures it gets executed before filters with
 * {@link Priorities#AUTHENTICATION}.
 *
 * @author hank
 */
@Provider
@Priority(300) // less than Priorities.AUTHENTICATION (1000)
@PreMatching
public class SessionCookieFilter implements ContainerRequestFilter {

    @Inject
    javax.inject.Provider<Request> requestProvider;

    @Inject
    Config config; // application-specific configuration providing the cookie name

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        Request request = requestProvider.get();
        request.setSessionCookieName(config.getSessionCookieName());
    }
}
|
My solution is to create an AuthController class (extending a small BaseController), as in the following code:
<?php
namespace App\Http\ApiControllers\V1;
use App\Http\Controllers\Controller;
use Dingo\Api\Routing\Helpers;
class BaseController extends Controller
{
use Helpers;
}
AuthController
<?php
/**
* Created by PhpStorm.
* User: ***
* Date: 26/10/2016
* Time: 14:07
*/
namespace App\Http\ApiControllers\V1;
use App\Http\Requests\AddUserRequest;
use App\Http\Transformer\UserTransformer;
use Illuminate\Http\Request;
use JWTAuth;
use Tymon\JWTAuth\Exceptions\JWTException;
use Tymon\JWTAuth\Exceptions\TokenExpiredException;
use Tymon\JWTAuth\Exceptions\TokenInvalidException;
class AuthController extends BaseController
{
public function authenticate(Request $request)
{
// grab credentials from the request
$credentials = $request->only('email', 'password');
try {
// attempt to verify the credentials and create a token for the user
if (!$token = JWTAuth::attempt($credentials)) {
// return response()->json(['error' => 'invalid_credentials'], 401);
//return response()->json(['error' => 'Invalid username or password'], 401);
return $this->response->error('Invalid username or password', 401);
}
} catch (JWTException $e) {
// something went wrong whilst attempting to encode the token
// return response()->json(['error' => 'could_not_create_token'], 500);
// return response()->json(['error' => 'Could not create token'], 500);
return $this->response->error('Could not create token', 500);
}
// all good so return the token
return response()->json(compact('token'));
// return $this->response->item($token);
}
public function getAuthenticatedUser()
{
try {
if (!$user = JWTAuth::parseToken()->authenticate()) {
return $this->response->errorNotFound('User not found');
}
} catch (TokenExpiredException $e) {
return $this->response->errorUnauthorized('token_expired');
} catch (TokenInvalidException $e) {
return $this->response->errorBadRequest('token_invalid');
} catch (JWTException $e) {
return $this->response->errorInternal('token_absent');
}
return $this->response->item($user, new UserTransformer());
}
}
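For reference, here is a minimal sketch of how these controllers might be wired up with Dingo's router. The route URIs and the jwt.auth middleware usage are my own assumptions, not part of the original answer.
<?php
// routes file -- hypothetical route registration for the controllers above.
$api = app('Dingo\Api\Routing\Router');

$api->version('v1', ['namespace' => 'App\Http\ApiControllers\V1'], function ($api) {
    // Issues a JWT for valid credentials.
    $api->post('auth/login', 'AuthController@authenticate');

    // Protected route: requires a valid token (jwt.auth middleware from tymon/jwt-auth).
    $api->get('auth/me', ['middleware' => 'jwt.auth', 'uses' => 'AuthController@getAuthenticatedUser']);
});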
Then you can customise the error messages as you like. For more info, you can refer to https://github.com/tymondesigns/jwt-auth/wiki/Authentication |
I encountered the same problem with my iOS app, and also checked out the similar question Can I use Google Drive SDK with sign in information from Google Sign In SDK in iOS. Drawing on the answer from Eran Marom, I was able to turn my Google Sign In credential into an OAuth2 credential, which I used to successfully access the Apps Script Execute API.
I worked in Swift.
In the App Delegate:
import GTMOAuth2
@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate, GIDSignInDelegate {
var window: UIWindow?
//Create an authorization fetcher, which will be used to pass credentials on to the API request
var myAuth: GTMFetcherAuthorizationProtocol? = nil
// [START didfinishlaunching]
func application(application: UIApplication,
didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool {
// Initialize sign-in
var configureError: NSError?
GGLContext.sharedInstance().configureWithError(&configureError)
assert(configureError == nil, "Error configuring Google services: \(configureError)")
GIDSignIn.sharedInstance().delegate = self
let scopes = "https://www.googleapis.com/auth/drive"
GIDSignIn.sharedInstance().scopes.append(scopes)
return true
}
//....
func signIn(signIn: GIDSignIn!, didSignInForUser user: GIDGoogleUser!,
withError error: NSError!) {
if (error == nil) {
//sets credentials in fetcher
myAuth = user.authentication.fetcherAuthorizer()
//...
} else {
}
//....
In the ViewController:
import UIKit
import GoogleAPIClient
import GTMOAuth2
@objc(ViewController)
class ViewController: UITableViewController, GIDSignInUIDelegate {
private let kClientID = "CLIENT ID"
private let kScriptId = "SCRIPT ID"
private let service = GTLService()
override func viewDidLoad() {
super.viewDidLoad()
GIDSignIn.sharedInstance().uiDelegate = self
//...
}
func toggleAuthUI() {
if (GIDSignIn.sharedInstance().hasAuthInKeychain()) {
// Pass the fetcher authorization stored in the AppDelegate on to the service
let appDelegate = UIApplication.sharedApplication().delegate as! AppDelegate
self.service.authorizer = appDelegate.myAuth
//...
callAppsScript()
} else {
//...
}
}
@objc func receiveToggleAuthUINotification(notification: NSNotification) {
if (notification.name == "ToggleAuthUINotification") {
self.toggleAuthUI()
if notification.userInfo != nil {
let userInfo:Dictionary<String,String!> =
notification.userInfo as! Dictionary<String,String!>
self.statusText.text = userInfo["statusText"]
}
}
}
func callAppsScript() {
let baseUrl = "https://script.googleapis.com/v1/scripts/\(kScriptId):run"
let url = GTLUtilities.URLWithString(baseUrl, queryParameters: nil)
// Create an execution request object.
var request = GTLObject()
request.setJSONValue("APPS_SCRIPT_FUNCTION", forKey: "function")
// Make the API request.
service.fetchObjectByInsertingObject(request,
forURL: url,
delegate: self,
didFinishSelector: "displayResultWithTicket:finishedWithObject:error:")
}
func displayResultWithTicket(ticket: GTLServiceTicket,
finishedWithObject object : GTLObject,
error : NSError?) {
//Display results...
}
|
There's one OAuth 2.0 flow that does not require any kind of human intervention: the client credentials grant. As illustrated in the image, there is no separate resource owner involved in the flow, because the client application acts on its own behalf.
(source: API Auth: Client Credentials Grant)
The most common scenario for this is when the client application wants to access only resources under its control, although the specification also mentions that it could in theory request access to resources under the control of another resource owner, i.e. a real user.
The client can request an access token using only its client credentials (or other supported means of authentication) when the client is requesting access to the protected resources under its control, or those of another resource owner that have been previously arranged with the authorization server (the method of which is beyond the scope of this specification).
(emphasis is mine, source: section 4.4 of OAuth2 RFC)
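As a rough illustration (the token endpoint, client credentials and scope below are placeholders, not taken from the original answer), the client credentials grant boils down to a single POST to the authorization server's token endpoint:
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"   # hypothetical authorization server
CLIENT_ID = "my-client-id"
CLIENT_SECRET = "my-client-secret"

# The client authenticates with its own credentials; no end user is involved.
response = requests.post(
    TOKEN_URL,
    data={"grant_type": "client_credentials", "scope": "read:reports"},
    auth=(CLIENT_ID, CLIENT_SECRET),  # HTTP Basic client authentication
)
response.raise_for_status()
access_token = response.json()["access_token"]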
Another possibility is to require a one-time interaction with the user and then use refresh tokens to be able to continue to perform requests on behalf of the user without further interaction, either forever or until the user revokes that access. The authorization code grant with a client application that can use refresh tokens would be suitable for this. |
I have finally found the solution; the approach is below.
First we need to get the LWSSO_COOKIE_KEY, QCSession, ALM_USER and XSRF_TOKEN values from the ALM authentication endpoint, then use those values for the subsequent calls.
Below is the complete working code to get the list of defects using ALM credentials:
<?php
$curl = curl_init();
Header('Content-type: application/json');
$credentials = "username:password";
curl_setopt_array($curl, array(
CURLOPT_URL => "https://host:port/qcbin/api/authentication/sign-in",
CURLOPT_ENCODING => "",
CURLOPT_MAXREDIRS => 10,
CURLOPT_TIMEOUT => 30,
CURLOPT_HEADER => 1,
CURLOPT_RETURNTRANSFER => 1,
CURLOPT_SSL_VERIFYHOST => 0,
CURLOPT_SSL_VERIFYPEER => 0,
CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
CURLOPT_CUSTOMREQUEST => "GET",
CURLOPT_HTTPHEADER => array(
"authorization: Basic " . base64_encode($credentials) ,
"cache-control: no-cache"
) ,
));
$response = curl_exec($curl);
$err = curl_error($curl);
curl_close($curl);
if ($err)
{
echo "cURL Error #:" . $err;
}
else
{
// If there is no error then get the response to form the array of headers to get the different values required
$almheaders = array();
$array_start = explode(';', $response);
foreach ($array_start as $key => $value) {
$remove_from_string = ['HTTP/1.1 200 OK','Path=/','HTTPOnly','HttpOnly','Content-Length',': 0'];
$replace_array = ['','','','','',''];
$value = str_replace($remove_from_string,$replace_array,$value);
$value = trim(preg_replace(('/Expires: [a-zA-Z]+, [0-9]+ [a-zA-Z]+ [0-9]+ [0-9]+:[0-9]+:[0-9]+ [a-zA-Z]+/'), '', $value));
$value = trim(preg_replace(('/Server: [a-zA-Z0-9.\(\)]+/'),'',$value));
if (!empty($value)) {
$almheaders[trim(explode('=',$value)[0])] = explode('=',$value)[1];
}
}
$LWSSO_COOKIE_KEY = $almheaders['Set-Cookie: LWSSO_COOKIE_KEY'];
$QCSession = $almheaders['Set-Cookie: QCSession'];
$ALM_USER = $almheaders['Set-Cookie: ALM_USER'];
$XSRF_TOKEN = $almheaders['Set-Cookie: XSRF-TOKEN'];
// Now form the Cookie value from the above values.
$cookie = "Cookie: JSESSIONID=33eyr1y736486zcnl0vtmo12;XSRF-TOKEN=$XSRF_TOKEN;QCSession=$QCSession;ALM_USER=$ALM_USER;LWSSO_COOKIE_KEY=$LWSSO_COOKIE_KEY";
// echo $cookie;
$curl = curl_init();
Header('Content-type: application/json');
curl_setopt_array($curl, array(
CURLOPT_URL => "https://host:port/qcbin/api/domains/CET_NTD/projects/BILLING_OPERATIONS/defects",
// CURLOPT_RETURNTRANSFER => true,
CURLOPT_ENCODING => "",
CURLOPT_MAXREDIRS => 10,
CURLOPT_TIMEOUT => 30,
CURLOPT_HEADER => 0,
CURLOPT_RETURNTRANSFER => 1,
CURLOPT_SSL_VERIFYHOST => 0,
CURLOPT_SSL_VERIFYPEER => 0,
CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
CURLOPT_CUSTOMREQUEST => "GET",
CURLOPT_HTTPHEADER => array(
"authorization: Basic " . base64_encode($credentials) ,
"cache-control: no-cache",
"Accept: application/json",
$cookie
) ,
));
$response = curl_exec($curl);
$err = curl_error($curl);
curl_close($curl);
if ($err)
{
echo "cURL Error #:" . $err;
}
else
{
echo $response;
}
}
?>
|
By default, the AuthorizeAttribute class is part of the System.Web.Mvc namespace (see the GitHub repository: aspnetwebstack). The method there that leads to the login redirection is HandleUnauthorizedRequest:
protected virtual void HandleUnauthorizedRequest(AuthorizationContext filterContext)
{
// Returns HTTP 401 - see comment in HttpUnauthorizedResult.cs.
filterContext.Result = new HttpUnauthorizedResult();
}
The HTTP 401 status code response from the method above will trigger the FormsAuthenticationModule (see reference below), whose OnLeave method redirects to the login URL with the FormsAuthentication.ReturnUrlVar property included:
strRedirect = loginUrl + "?" + FormsAuthentication.ReturnUrlVar + "=" + HttpUtility.UrlEncode(strUrl, context.Request.ContentEncoding);
// Do the redirect
context.Response.Redirect(strRedirect, false);
To override this behavior (including removing the ReturnUrl part), create an authorization class that extends the AuthorizeAttribute class, e.g. (this is an example implementation):
using System;
using System.Web.Mvc;
using System.Web.Routing;
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, Inherited = true, AllowMultiple = true)]
public class CustomAuthorizeAttribute : AuthorizeAttribute
{
// @Override
protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
{
if (!filterContext.HttpContext.Request.IsAuthenticated)
{
filterContext.Result = new RedirectToRouteResult(new RouteValueDictionary(
new { controller = "Account",
action = "Login"
}));
}
else
{
base.HandleUnauthorizedRequest(filterContext);
}
}
}
Then, you may apply the custom authorization attribute like this:
[CustomAuthorizeAttribute]
public ActionResult UserPage()
{
return View();
}
NB: Use AuthorizeAttribute on all pages that require user login authentication; for the login page, use AllowAnonymousAttribute instead.
Related references:
System.Web.Security.FormsAuthenticationModule (MS Github reference)
What initially sets the ReturnUrl parameter when using AuthorizeAttribute
Generate a return Url with a custom AuthorizeAttribute
How to remove returnurl from url? |
Well you need to consider several factors such as:
Authenticating the API. Your API should be called by valid users that are authorized and authenticated
Caching API results. Your API should cache the results of API call. This will allow your API to handle requests more quickly, and it will be able to handle more requests per second. Memcache can be used to cache results of API call
The API architecture. RESTful APIs have less overhead compared to SOAP-based APIs. SOAP-based APIs have better support for authentication. They are also better structured than RESTful APIs.
API documentation. Your API should be well documented and easy for users to understand.
API scope. Your API should have a well defined scope. For example will it be used over the internet as a public API or will it be used as private API inside corporate intranet.
Device support. When designing your API you should keep in mind the devices that will consume your API, for example smartphones, desktop applications, browser-based applications, server applications, etc.
API output format. When designing your API you should keep in mind the format of the output. For example will the output contain user interface related data or just plain data. One popular approach is known as separation of concerns (https://en.wikipedia.org/wiki/Separation_of_concerns). For example separating the backend and frontend logic.
Rate limiting and throttling. Your API should implement rate limiting and throttling to prevent overuse and misuse of the API (see the sketch after this list).
API versioning and backward compatibility. Your API should be carefully versioned. For example if you update your API, then the new version of your API should support older version of API clients. Your API should continue to support the old API clients until all the API clients have migrated to the new version of your API.
API pricing and monitoring. The usage of your API should be monitored, so you know who is using your API and how it is being used. You may also charge users for using your API.
Metric for success. You should also decide which metric to use for measuring the success of your API, for example the number of API calls per second or the monetary earnings from your API. Development activities such as research, publication of articles, open source code, participation in online forums, etc. may also be considered when determining the success of your API.
Estimation of cost involved. You should also calculate the cost of developing and deploying your API. For example how much time it will take you to produce a usable version of your API. How much of your development time the API takes etc.
Updating your API. You should also decide how often to update your API. For example how often should new features be added. You should also keep in mind the backward compatibility of your API, so updating your API should not negatively affect your clients.
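As referenced in the rate limiting point above, here is a minimal, framework-agnostic sketch of a per-client token bucket; the rate and capacity values are arbitrary examples.
import time

class TokenBucket:
    """Minimal token bucket: `rate` tokens per second, bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should answer with HTTP 429 Too Many Requests

# One bucket per API key, e.g. 5 requests/second with bursts of up to 10.
buckets = {}
def is_allowed(api_key):
    bucket = buckets.setdefault(api_key, TokenBucket(rate=5, capacity=10))
    return bucket.allow()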
|
Finally figured out the solution.
I have a JavaScript function that generates the Auth0 token. Once the token is generated, I set it in the browser cookies along with the user credentials. This way, when I hit the application URL that I want to test, the Auth0 browser-specific authentication prompt isn't displayed.
Below is the code for the same:
var request = require('request');
this.performAuthoLogin = function() {
var defer = protractor.promise.defer();
var credentials = {
"client_id": clientId,
"username": userName,
"password": password,
"id_token": "",
"connection": connectionName,
"grant_type": "password",
"scope": "openid",
"device": "api"
}
request({
url: url,
method: 'POST',
json: true,
body: credentials,
headers: {
'Content-Type': 'application/json'
}
}, function(error, response, body) {
if (error) {
defer.reject(error);
} else {
authTokenId = body.id_token;
console.log(authTokenId);
var profile = {
username: userNameToLogin,
email: emailId
};
browser.manage().addCookie("profile", profile, '/', applicationUrl)
browser.manage().addCookie("id_token", authTokenId, '/', applicationUrl);
defer.fulfill(body);
}
});
return defer.promise;
};
|
How to seek in CTR mode and decrypt part of the stream?
Using a Crypto++ Pipeline is a tad bit awkward because Discard or Skip on a Source does not work as expected. You have to Pump data into "nothing" under the current implementation. Also see Skip'ing on a Source does not work as expected on Stack Overflow.
Below is an example of using AES/CTR and seeking in the stream. It needs to perform a "two part" seek. First, it discards bytes on the Source called cipher. Second, it seeks in the keystream on the encryption object called enc to synchronize the counter. Once the seek is performed, the remainder of the cipher text is decrypted by calling PumpAll(), which pumps the remainder of the data through the pipeline.
#include "modes.h"
#include "aes.h"
using namespace CryptoPP;
int main(int argc, char* argv[])
{
string plain = "Now is the time for all good men to come to the aide of their country";
byte key[AES::DEFAULT_KEYLENGTH] = {0};
byte nonce[AES::BLOCKSIZE] = {0};
CTR_Mode<AES>::Encryption enc;
enc.SetKeyWithIV(key, sizeof(key), nonce, sizeof(nonce));
string cipher;
StringSource ss1(plain, true, new StreamTransformationFilter(enc, new StringSink(cipher)));
for(size_t i=0; i<cipher.size(); i++)
{
CTR_Mode<AES>::Decryption dec;
dec.SetKeyWithIV(key, sizeof(key), nonce, sizeof(nonce));
StringSource ss2(cipher, false);
ss2.Pump(i);
dec.Seek(i);
string recover;
StreamTransformationFilter stf(dec, new StringSink(recover));
// Attach the decryption filter after seeking
ss2.Attach(new Redirector(stf));
ss2.PumpAll();
cout << i << ": " << recover << endl;
}
return 0;
}
Here is the result:
$ ./test.exe
0: Now is the time for all good men to come to the aide of their country
1: ow is the time for all good men to come to the aide of their country
2: w is the time for all good men to come to the aide of their country
3: is the time for all good men to come to the aide of their country
4: is the time for all good men to come to the aide of their country
5: s the time for all good men to come to the aide of their country
6: the time for all good men to come to the aide of their country
7: the time for all good men to come to the aide of their country
8: he time for all good men to come to the aide of their country
9: e time for all good men to come to the aide of their country
10: time for all good men to come to the aide of their country
11: time for all good men to come to the aide of their country
12: ime for all good men to come to the aide of their country
13: me for all good men to come to the aide of their country
14: e for all good men to come to the aide of their country
15: for all good men to come to the aide of their country
16: for all good men to come to the aide of their country
17: or all good men to come to the aide of their country
18: r all good men to come to the aide of their country
19: all good men to come to the aide of their country
20: all good men to come to the aide of their country
21: ll good men to come to the aide of their country
22: l good men to come to the aide of their country
23: good men to come to the aide of their country
24: good men to come to the aide of their country
25: ood men to come to the aide of their country
26: od men to come to the aide of their country
27: d men to come to the aide of their country
28: men to come to the aide of their country
29: men to come to the aide of their country
30: en to come to the aide of their country
31: n to come to the aide of their country
32: to come to the aide of their country
33: to come to the aide of their country
34: o come to the aide of their country
35: come to the aide of their country
36: come to the aide of their country
37: ome to the aide of their country
38: me to the aide of their country
39: e to the aide of their country
40: to the aide of their country
41: to the aide of their country
42: o the aide of their country
43: the aide of their country
44: the aide of their country
45: he aide of their country
46: e aide of their country
47: aide of their country
48: aide of their country
49: ide of their country
50: de of their country
51: e of their country
52: of their country
53: of their country
54: f their country
55: their country
56: their country
57: heir country
58: eir country
59: ir country
60: r country
61: country
62: country
63: ountry
64: untry
65: ntry
66: try
67: ry
68: y
Now that you've seen the general pattern, here are the modifications for your dataset using the range [5,10].
You do not have to call stf.MessageEnd() because the recovered text is ready as soon as the XOR is performed. Other modes may need the call to MessageEnd(). Also see Init-Update-Final on the Crypto++ wiki.
StringSource ss2(cipher, false);
ss2.Pump(5);
dec.Seek(5);
string recover;
StreamTransformationFilter stf(dec, new StringSink(recover));
// Attach the decryption filter after seeking
ss2.Attach(new Redirector(stf));
ss2.Pump(10 - 5 + 1);
cout << "'" << recover << "'" << endl;
It produces:
$ ./test.exe
's the '
And here's a little more:
StringSource ss2(cipher, false);
ss2.Pump(5);
dec.Seek(5);
string recover;
StreamTransformationFilter stf(dec, new StringSink(recover));
// Attach the decryption filter after seeking
ss2.Attach(new Redirector(stf));
ss2.Pump(10 - 5 + 1);
cout << "'" << recover << "'" << endl;
ss2.Pump(1);
cout << "'" << recover << "'" << endl;
ss2.Pump(1);
cout << "'" << recover << "'" << endl;
It produces:
$ ./test.exe
's the '
's the t'
's the ti'
Earlier I said "Using a Crypto++ Pipeline is a tad bit awkward". Here's all we want to do, but we can't at the moment:
StringSource ss(cipher, false, new StreamTransformationFilter(dec, new StringSink(x)));
ss.Skip(5); // Discard bytes and synchronize stream
ss.Pump(5); // Process bytes [5,10]
cout << x << endl;
Regarding Rob's comment "You must decrypt an entire 16-byte block...": if you were working with another mode, like CBC mode, then you would have to process the preceding plain text or cipher text, and you would have to operate on blocks. CBC mode and its chaining properties demand it.
However, CTR is designed a little differently. It's designed to be seekable, and it allows you to jump around in the stream. In this respect, it's a lot like OFB mode. (CTR mode and OFB mode differ in the way they generate the keystream, but both XOR the keystream with the plain text or cipher text.) |
Taking a look at the Conversation API doc it does look possible.
Applications can also use tokens to establish authenticated communications with Watson services without embedding their service credentials in every call. You write an authentication proxy in Bluemix to obtain a token for your client application, which can then use the token to call the service directly. You use your service credentials to obtain a token for that service...
There is one extra step. You will have to use the Authorization Service to generate a Watson auth token on your server side.
Then you can have your client side use that token, either with the header X-Watson-Authorization-Token or as a query parameter with the key named watson-token, to make requests directly to the Conversation service.
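Roughly, that server-side token generation could look like the PHP below. The authorization endpoint pattern follows the Watson documentation of the time and is an assumption here, as are the credentials; check the current docs before relying on it.
<?php
// Keep the service credentials on the server; only the short-lived token goes to the client.
$serviceCredentials = 'conversation_username:conversation_password';
$serviceUrl = 'https://gateway.watsonplatform.net/conversation/api';
$tokenUrl = 'https://gateway.watsonplatform.net/authorization/api/v1/token?url=' . urlencode($serviceUrl);

$ch = curl_init($tokenUrl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERPWD, $serviceCredentials);
$token = curl_exec($ch);
curl_close($ch);

// The client then sends this value as the X-Watson-Authorization-Token header
// (or the watson-token query param) directly to the Conversation service.
echo $token;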
The answer I provided here may also help you, as it has some working sample code for the Watson Tone Analyzer service that does what I mentioned above, with a PHP server used to generate the Watson auth token. You will have to substitute the Conversation URLs in place of the Tone Analyzer ones. |
It looks like all I needed to do was to put the app.UseOpenIdConnectAuthentication code before the other authentication options. This allows the Account/Login form to be displayed by default and also displays the OpenId button to allow that option.
public void ConfigureAuth(IAppBuilder app)
{
app.CreatePerOwinContext(ApplicationDbContext.Create);
app.CreatePerOwinContext<ApplicationUserManager>(ApplicationUserManager.Create);
app.CreatePerOwinContext<ApplicationSignInManager>(ApplicationSignInManager.Create);
app.UseExternalSignInCookie(DefaultAuthenticationTypes.ExternalCookie);
app.UseOpenIdConnectAuthentication(
new OpenIdConnectAuthenticationOptions
{
ClientId = clientId,
MetadataAddress = metadataAddress,
RedirectUri = redirectUri,
//PostLogoutRedirectUri = postLogoutRedirectUri
});
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
LoginPath = new PathString("/Account/Login"),
Provider = new CookieAuthenticationProvider
{
OnValidateIdentity = SecurityStampValidator.OnValidateIdentity<ApplicationUserManager, ApplicationUser>(
validateInterval: TimeSpan.FromMinutes(30),
regenerateIdentity: (manager, user) => user.GenerateUserIdentityAsync(manager))
}
});
app.UseTwoFactorSignInCookie(DefaultAuthenticationTypes.TwoFactorCookie, TimeSpan.FromMinutes(5));
app.UseTwoFactorRememberBrowserCookie(DefaultAuthenticationTypes.TwoFactorRememberBrowserCookie);
}
|
Regarding "JMeter is not a browser". It is really not a browser, but it may act like a browser given proper configuration, so make sure you:
add HTTP Cookie Manager to your Test Plan to represent browser cookies and deal with cookie-based authentication
add HTTP Header Manager to send the appropriate headers
configure HTTP Request samplers via HTTP Request Defaults to
Retrieve all embedded resources
Use thread pool of around 5 concurrent threads to do it
Add HTTP Cache Manager to represent browser cache (i.e. embedded resources retrieved only once per virtual user per iteration)
if your application is built on AJAX - you need to mimic the AJAX requests with JMeter as well
Regarding "rendering": suppose you detect that your application renders slowly on a certain browser and there is nothing you can do by tuning the application. What's next? Will you develop a patch or raise an issue with the browser developers? I would recommend focusing on areas you can control, and DOM rendering by a browser is not one of them.
If you still need these client-side metrics for any reason, you can consider using the WebDriver Sampler along with the main JMeter load test so real browser metrics can also be added to the final report. You can even use the Navigation Timing API to collect the exact timings and add them to the load test report.
See Using Selenium with JMeter's WebDriver Sampler to get started.
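For illustration, a minimal WebDriver Sampler script (with the sampler's language set to JavaScript; the URL is a placeholder) looks roughly like this:
WDS.sampleResult.sampleStart();              // start the timer
WDS.browser.get('http://example.com/');      // full browser page load, including rendering
WDS.log.info('Loaded: ' + WDS.browser.getTitle());
WDS.sampleResult.sampleEnd();                // stop the timer; the elapsed time goes into the report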
There are multiple options for tracking your application performance between builds (and JMeter tests executions), i.e.
JChav - JMeter Chart History And Visualisation - a standalone tool
Jenkins Performance Plugin - a Continuous Integration solution
|
Your question was asked a long time ago, but I was stuck on the same problem without a solution, so I decided to explain how to resolve it.
You are allowed to edit the FilterChain: you may put an ExceptionTranslationFilter before your PreAuthenticationFilter, like this:
http
.csrf().disable()
.addFilter(myPreAuthenticationFilter)
.addFilterBefore(new ExceptionTranslationFilter(
new Http403ForbiddenEntryPoint()),
myPreAuthenticationFilter.getClass()
)
.authorizeRequests()
.antMatchers(authSecuredUrls).authenticated()
.anyRequest().permitAll()
.and()
.httpBasic()
.authenticationEntryPoint(new Http403ForbiddenEntryPoint())
;
Your AbstractPreAuthenticatedProcessingFilter's getPreAuthenticatedCredentials or getPreAuthenticatedPrincipal methods may throw an AuthenticationException if no credential is provided, e.g.:
if ( !StringUtils.hasText(request.getParameter("MY_TOKEN")) ) {
throw new BadCredentialsException("No pre-authenticated principal found in request");
}
Or your AuthenticationManager's authenticate method can throw an AuthenticationException too. |
Is there a method to get all saved fingerprints that are available on the device?
No.
Is it possible to see which one of the fingerprints on the device were used to unlock
No.
However, there are some limitations to which fingerprints can be used to authenticate within your app. The result of a fingerprint authentication is that you make a cryptographic key available to perform some cryptographic operation (e.g. creating a digital signature). So when you add a user in your app you'd typically create a cryptographic key that you associate with that user. Then later on, when the user wants to perform some action that requires him/her to be authenticated, you do the fingerprint authentication, which gives you access to the key, which you can use to do whatever it is that you need to do to verify that the user should be allowed to perform the action.
What happens when a new fingerprint is enrolled is that any existing cryptographic keys that require fingerprint authentication will be permanently invalidated.
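For illustration, here is a minimal sketch of creating such a per-user key on Android; the key alias and algorithm choices are my own assumptions, not from the original answer.
import java.security.KeyPairGenerator;
import android.security.keystore.KeyGenParameterSpec;
import android.security.keystore.KeyProperties;

public class FingerprintKeys {
    // Creates an EC signing key in the Android Keystore that can only be used
    // after the user authenticates with a fingerprint. Enrolling a new fingerprint
    // later permanently invalidates the key (KeyPermanentlyInvalidatedException on next use).
    public static void createUserKey(String alias) throws Exception {
        KeyPairGenerator generator = KeyPairGenerator.getInstance(
                KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore");
        generator.initialize(new KeyGenParameterSpec.Builder(alias, KeyProperties.PURPOSE_SIGN)
                .setDigests(KeyProperties.DIGEST_SHA256)
                .setUserAuthenticationRequired(true) // require fingerprint auth for key use
                .build());
        generator.generateKeyPair();
    }
}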
That leaves us with the scenario where there are multiple enrolled fingerprints before the user is added in your app. I'm not aware of any way to do anything about this with the current APIs. So the best you can do might be to add some step in your fingerprint-enabling UI flow where the user is asked to verify that only they have enrolled a fingerprint on the device (e.g. by checking a checkbox or clicking a button). |
I would run the request page on a separate server in your DMZ functioning as proxy to the internal application server. Here is a brief description:
The php script for the request url (=request page) needs to be accessible to the public internet, so that Slack can call it. I would put it on a separate server and I would put that server in the DMZ of your company. That is usually the best place for servers that need to be accessible from the outside, but also need to access servers on the inside of your company. Make sure to use SSL and the verification token to secure your calls from Slack.
The request page can run on a small server and will need to have a webserver (e.g. Apache) and PHP. If you are planning to have more complex requests you may also need a database. It will also need to run SSL, so you will need a certificate. You can also use your existing webserver to the outside (example.com) if it meets these requirements.
The request page needs to have access to your application server, e.g. via VPN. It would need to function as proxy: receive the request from Slack, make requests to the application server based on the specifics of the slash command and then return the info back to Slack.
Another important point is user authentication. I read from your question that not all users on your Slack team should have access to the application server, so your request script needs a method to distinguish which users are allowed access and which are not. It would be easiest if these users could be identified by membership of a specific Slack group. In any case you would probably need an additional bot that ensures mapping of Slack users to VPN users.
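To make the proxy idea concrete, here is a rough PHP sketch of the request script's entry point. The field names follow Slack's slash-command payload; the allowed-user list and backend URL are placeholders of my own, not from the original answer.
<?php
// Minimal sketch of the proxy script that Slack calls for the slash command.
$expectedToken = (string) getenv('SLACK_VERIFICATION_TOKEN');
$allowedUsers  = ['U12345678', 'U87654321'];   // Slack user IDs permitted to use the command

if (!hash_equals($expectedToken, $_POST['token'] ?? '')) {
    http_response_code(403);
    exit('Invalid verification token');
}
if (!in_array($_POST['user_id'] ?? '', $allowedUsers, true)) {
    http_response_code(403);
    exit('You are not allowed to use this command');
}

// Forward the command text to the internal application server (reachable via VPN)
// and relay its answer back to Slack as the slash-command response.
$backend = 'https://app.internal.example/api/query?q=' . urlencode($_POST['text'] ?? '');
$answer  = file_get_contents($backend);

header('Content-Type: application/json');
echo json_encode(['response_type' => 'ephemeral', 'text' => $answer]);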
|
buf is not large enough for this statement:
sprintf(buf, "#include <stdio.h>\n#include <time.h>\n#include <stdlib.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <math.h>\n#include <unistd.h>\n#define BILLION 1000000000L\nint main()\n{\nchar buf[500];\nint numberOfElements = %i;\nint currentTest = %i;\nint randomArray[numberOfElements];\nint minIndex;\nint minValue;\nstruct timespec requestStart;\nstruct timespec requestEnd;\nlong int recordStartTime;\nlong int recordEndTime;\nlong int elapsedTime;\nFILE *arrangedArray;\nFILE *stopwatch;\nsprintf(buf,\"C:/Users/Erlandas/Desktop/Research/C/TestNo%%i/ProgramNo3/ProgramNo3Stopwatch.txt\", currentTest);\nstopwatch = fopen(buf, \"a+\");\nstruct stat st = {0};\nsprintf(buf, \"C:/Users/Erlandas/Desktop/Research/C/TestNo%%i/ProgramNo3/\", currentTest);\nif (stat(buf, &st) == -1)\n{\nsprintf(buf, \"C:/Users/Erlandas/Desktop/Research/C/TestNo%%i/ProgramNo3/\", currentTest);\nmkdir(buf);\n}\nsprintf(buf, \"C:/Users/Erlandas/Desktop/Research/C/TestNo%%i/CodeForNo3/SampleNo%i/ArrangedArray.txt\", currentTest, currentSample);\narrangedArray = fopen(buf, \"w+\");\n", numberOfElements, currentTest, currentSample);
You should make buf much larger and use snprintf() to avoid buffer overflows.
You should break all these strings into fragments that fit on regular lines:
sprintf(buf, "clock_gettime(CLOCK_MONOTONIC, &requestEnd);\n"
"recordEndTime = (requestEnd.tv_nsec + requestEnd.tv_sec * BILLION);\n"
"elapsedTime = recordEndTime - recordStartTime;\n"
"sprintf(buf, \"%%li\\n\", elapsedTime);\n"
"fputs(buf, stopwatch);\n");
fputs(buf, codeOutput[currentSample]);
But note that you do not need sprintf() at all for some of these: since you are not substituting any variables, you could just call fputs() with the string directly.
Furthermore, the way you test for end of file is incorrect: while (buf[0] != EOF) cannot test if you have reached the end of file, checking the return value of fgets() is the correct way to do this:
while (fgets(buf, 500, arrayAssignmentReader[currentSample]) != NULL) {
fputs(buf, codeOutput[currentSample]);
fputs("\n", codeOutput[currentSample]);
}
|
Since GET requests should not modify any state on the server and should be "read-only" usually CSRF protection should not be needed for GET requests.
The problem about leakage is mostly related to browser usage, because GET requests usually do not contain a body and thus the token is sent as a request parameter. Thus the CSRF token could be visible through shoulder surfing, stored as a bookmark, appear in the browser history or be logged on the server (although logging also applies to AJAX requests).
Since you are talking about AJAX requests most of this leakage does not apply, although setting it in header may help in case of URLs appearing in the logs, but logs could also contain headers.
But actually a custom header (with or without a token) is often used to prevent CSRF attacks, because cross-domain AJAX requests cannot set custom headers other than:
Accept
Accept-Language
Content-Language
Last-Event-ID
Content-Type
Thus using a custom header like X-Requested-With: XMLHttpRequest which is e.g. set by jQuery and verifying this header on the server can prevent CSRF attacks.
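As an illustration (this is my own sketch, not from the original answer), the server-side check can be as simple as a servlet filter mapped to the state-changing AJAX endpoints:
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Rejects requests that lack the custom header set by the JavaScript client.
public class RequestedWithFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        if (!"XMLHttpRequest".equals(request.getHeader("X-Requested-With"))) {
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        chain.doFilter(req, res);
    }
    @Override public void init(FilterConfig config) {}
    @Override public void destroy() {}
}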
Last but not least, there is one interesting article about having the same token for GET and POST requests, and having same-origin access to the GET request via an XSS vulnerability of a separate web application in the same origin, where the token can be leaked from the GET request and used for a POST. The solution there is to either not use CSRF tokens for GET or use different tokens for GET and POST.
Basically regarding your questions, if your GET does not have any side-effects, a CSRF token is not really needed but would not hurt. On the other hand, if your GET request changes something on the server, you should think about using another verb (e.g. POST) depending on what you want to do and then protect your POST requests with a CSRF token or a custom header. |
I had the same problem as you! Take a look at how I solved it!
app.js
angular.module('app', ['ionic', 'firebase', 'app.controllers', 'app.routes', 'app.directives', 'app.services', 'app.filters'])
.run(function($ionicPlatform, $rootScope, $state) {
$ionicPlatform.ready(function() {
// Hide the accessory bar by default (remove this to show the accessory bar above the keyboard
// for form inputs)
if (window.cordova && window.cordova.plugins && window.cordova.plugins.Keyboard) {
cordova.plugins.Keyboard.hideKeyboardAccessoryBar(true);
cordova.plugins.Keyboard.disableScroll(true);
}
if (window.StatusBar) {//
// org.apache.cordova.statusbar required
StatusBar.styleDefault();
}
});
//stateChange event
$rootScope.$on("$stateChangeStart", function(event, toState, toParams, fromState, fromParams){
var user = firebase.auth().currentUser;
if (toState.authRequired && !user){ //Assuming the AuthService holds authentication logic
// User isn’t authenticated
$state.transitionTo("login");
event.preventDefault();
}
});
// Initialize Firebase Here
})
routes.js
angular.module('app.routes', ['ionicUIRouter'])
.config(function($stateProvider, $urlRouterProvider) {
$stateProvider
.state('login', {
url: '/login',
templateUrl: 'templates/login.html',
controller: 'loginCtrl'
})
.state('menu', {
url: '/menu',
templateUrl: 'templates/menu.html',
abstract:true,
controller: 'menuCtrl'
})
.state('menu.dash', {
url: '/contas',
templateUrl: 'templates/dash.html',
controller: 'contasCtrl',
authRequired: true
})
$urlRouterProvider.otherwise('/login')
});
|
This is a question which has a wide range of answers, since there are several ways to go.
The simplest way is to call the endpoints of your first app, which expose your team entities via a REST API. This means that every time your second service needs to do something with a team entity, it retrieves one or more via HTTP. This is currently mostly covered by the UAA configuration (using JHipster UAA for authentication).
With uaa, you can just define something very similar to a JPA repository:
@AuthorizedFeignClient(name = "microservice1")
public interface TeamClient {
@RequestMapping(value = "/api/teams/", method = RequestMethod.GET)
List<Team> findTeams();
@RequestMapping(value = "/api/teams/{id}", method = RequestMethod.GET)
Team findTeam(@PathVariable("id") Long id);
}
It looks like the way you define repositories, but it works with REST internally. It also handles the security stuff for you, so you can ensure only defined users or services may access your resources. More about this solution here.
The advantage of this strategy is its simplicity and the presence of ready-to-use implementations from Spring and JHipster. The drawback is that this can perform poorly when your design forces you to use such requests too often, which leads to a lot of network traffic.
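A service in the second application can then inject this client like any other Spring bean; a minimal sketch (the class names are assumptions of mine):
import java.util.List;
import org.springframework.stereotype.Service;

@Service
public class TeamLookupService {

    private final TeamClient teamClient;

    public TeamLookupService(TeamClient teamClient) {
        this.teamClient = teamClient;
    }

    // Each call results in an HTTP request to microservice1's /api/teams endpoint,
    // with the OAuth2 token handled by @AuthorizedFeignClient.
    public List<Team> allTeams() {
        return teamClient.findTeams();
    }
}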
An alternative way of solving this is to use event-driven systems, like Spring Cloud Bus, event sourcing, CQRS, etc. However, these options are not directly supported by JHipster and need some time to get into, as they are not trivial. |
As in HiveServer2, the empty client authorization may actually be a red herring.
The first HTTP request doesn't have the header but it is generally sent after the SPNEGO challenge from the server.
I wasn't actually aware that the SparkSQL thrift server could be used in the same way that Hive can be. Do you know whether it has Trusted Proxy support, as is implemented in many services in Hadoop? This is what allows a third-party component such as Apache Knox to act on behalf of another user by asserting the authenticated user's name via the doAs query param. It also assures that the doAs is coming from an identity it trusts - in this case, via Kerberos/SPNEGO authentication.
If it doesn't have support for Trusted Proxies then it will not work straight out of the box. Either it would need to be added to SparkSQL thrift server or a custom dispatch provider created for SparkSQL in Knox. The custom dispatch would allow us to propagate the user identity as expected by SparkSQL.
Hope that is helpful.
--larry |
Try the code below and you will get a result.
<?php
$data=array("createTransactionRequest" => array(
"merchantAuthentication" => array(
"name" => "2bU77DwM",
"transactionKey" => "92x86d7M7f6NHK98"
),
"refId" => "9898989898",
"transactionRequest" => array(
"transactionType" => "authCaptureTransaction",
"amount" => "25",
"payment" => array(
"creditCard" => array(
"cardNumber" => "5424000000000015",
"expirationDate" => "1220",
"cardCode" => "999"
)
)
)
)
);
$data_string = json_encode($data);
$ch = curl_init('https://apitest.authorize.net/xml/v1/request.api');
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "POST");
curl_setopt($ch, CURLOPT_POSTFIELDS, $data_string);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
'Content-Type: application/json',
'Content-Length: ' . strlen($data_string))
);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
$result = curl_exec($ch);
curl_close($ch);
// below is my code
$final_result = json_decode( preg_replace('/[\x00-\x1F\x80-\xFF]/', '', $result), true );
echo "<pre>";
print_r($final_result);
?>
You just need to use
$output = json_decode(
preg_replace('/[\x00-\x1F\x80-\xFF]/', '', $result), true );
print_r($output);
I have checked it! It's working for me.
Hope this will help! |
While at first glance it seems straightforward, there are a couple of hurdles I encountered.
So I am providing steps that worked fine for me (to encrypt the appSettings section) using the default crypto provider:
Encrypt sections in the web.config:
Open an admin command shell (run as administrator!). The command prompt will be on C:, which is assumed for the steps below. It is further assumed that the application is deployed to D:\Apps\myApp - replace this with the path you're using in step 3.
cd "C:\Windows\Microsoft.NET\Framework64\v4.0.30319", on 32 bit Windows systems use Framework instead of Framework64
cd /D "D:\Apps\myApp" (Note: the /D switch changes the drive automatically if it is different from your current drive. Here it changes both path and drive, so the current directory will be D:\Apps\myApp afterwards.)
c:aspnet_regiis -pef appSettings .
You should see this message:
Microsoft (R) ASP.NET RegIIS version 4.0.30319.0
Administration utility to install and uninstall ASP.NET on the local machine.
Copyright (C) Microsoft Corporation. All rights reserved.
Encrypting configuration section...
Succeeded!
You can also decrypt sections in the web.config:
These are the same steps, but with the option -pdf instead of -pef for aspnet_regiis.
It is also possible to encrypt other sections of your web.config, for example you can encrypt the connection strings section via:
aspnet_regiis -pe "connectionStrings" -app "/SampleApplication"
More details about that can be found here.
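As an alternative to the command-line tool, a section can also be protected programmatically with the same provider. A rough C# sketch (my own illustration, not from the original steps):
using System.Configuration;
using System.Web.Configuration;

public static class ConfigProtection
{
    public static void ProtectAppSettings(string appPath = "~")
    {
        Configuration config = WebConfigurationManager.OpenWebConfiguration(appPath);
        ConfigurationSection section = config.GetSection("appSettings");

        if (!section.SectionInformation.IsProtected)
        {
            // "DataProtectionConfigurationProvider" = DPAPI (machine key);
            // use "RsaProtectedConfigurationProvider" for an RSA key container instead.
            section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
            config.Save(ConfigurationSaveMode.Modified);
        }
    }
}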
Note: The encryption above is transparent to your web application, i.e. your web application doesn't recognize that the settings are encrypted. You can also choose to use non-transparent encryption, for example by using Microsoft's DPAPI or by using AES along with the Framework's AES Class. How it is done with DPAPI I have described here at Stackoverflow. DPAPI works very similar in a sense that it uses the machine's or user credential's keys. Generally, non-transparent encryption gives you more control, for instance you can add a SALT, or you can use a key based on a user's passphrase. If you want to know more about how to generate a key from a passphrase, look here. |
The alternative way is by using an application process. I much prefer that way over enabling public access to a db procedure.
You can find the how to in the blog linked and partly copied below. The blog is by Joel Kallman, director of software development at Oracle and the manager for Apex. This guy is worth listening to.
http://joelkallman.blogspot.be/2014/03/yet-another-post-how-to-link-to.html
I've copied over most of the blog and updated the links to working documentation links.
Firstly, a way not to do this is via a PL/SQL procedure that is called
directly from a URL. I see this "solution" commonly documented on the
Internet, and in general, it should not be followed. The default
configuration of Oracle Application Express has a white list of entry
points, callable from a URL. For security reasons, you absolutely
want to leave this restriction in place and not relax it. This is
specified as the PlsqlRequestValidationFunction for mod_plsql and
security.disableDefaultExclusionList for Oracle REST Data Services
(nee APEX Listener). With this default security measure in place, you
will not be able to invoke a procedure in your schema from a URL.
Good!
The easiest way to return an image from a URL in an APEX application
is either via a RESTful Service or via an On-Demand process. This
blog post will cover the On-Demand process. It's definitely easier to
implement via a RESTful Service, and if you can do it via a RESTful
call, that will always be much faster - Kris has a great example how
to do this. However, one benefit of doing this via an On Demand
process is that it will also be constrained by any conditions or
authorization schemes that are in place for your APEX application
(that is, if your application requires authentication and
authorization, someone won't be able to access the URL unless they are
likewise authenticated to your APEX application and fully authorized).
Navigate to Application Builder -> Shared Components -> Application Items
Click Create
Name: FILE_ID
Scope: Application
Session State Protection: Unrestricted
Navigate to Application Builder -> Shared Components -> Application Processes
Click Create
Name: GETIMAGE
Point: On Demand: Run this application process when requested by a page process.
Click Next
For Process Text, enter the following code:
begin
for c1 in (select *
from my_image_table
where id = :FILE_ID) loop
--
sys.htp.init;
sys.owa_util.mime_header( c1.mime_type, FALSE );
sys.htp.p('Content-length: ' || sys.dbms_lob.getlength( c1.blob_content));
sys.htp.p('Content-Disposition: attachment; filename="' || c1.filename || '"' );
sys.htp.p('Cache-Control: max-age=3600'); -- tell the browser to cache for one hour, adjust as necessary
sys.owa_util.http_header_close;
sys.wpg_docload.download_file( c1.blob_content );
apex_application.stop_apex_engine;
end loop;
end;
Then, all you need to do is construct a URL in your application which calls this application process, as described in the Application Express Application Builder Users' Guide. You could manually construct a URL using APEX_UTIL.PREPARE_URL, or specify a link in the declarative attributes of a Report Column. Just be sure to specify a Request of APPLICATION_PROCESS=GETIMAGE (or whatever your application process name is). The URL will look something like:
f?p=&APP_ID.:0:&APP_SESSION.:APPLICATION_PROCESS=GETIMAGE:::FILE_ID:<some_valid_id>
That's all there is to it.
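For example, a report query could build that link with APEX_UTIL.PREPARE_URL. This is a sketch based on the my_image_table used above; checksum behaviour depends on your session state protection settings.
select id,
       filename,
       apex_util.prepare_url(
         p_url => 'f?p=' || :APP_ID || ':0:' || :APP_SESSION ||
                  ':APPLICATION_PROCESS=GETIMAGE:::FILE_ID:' || id) as download_url
  from my_image_table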
A few closing comments:
Be mindful of the authorization scheme specified for the application process. By default, the Authorization Scheme will be "Must Not Be Public User", which is normally acceptable for applications requiring authentication. But also remember that you could restrict these links based upon other authorization schemes too.
If you want to display the image inline instead of being downloaded by a browser, just change the Content-Disposition from 'attachment' to 'inline'.
A reasonable extension and optimization to this code would be to add a version number to your underlying table, increment it every time the file changes, and then reference this file version number in the URL. Doing this, in combination with a Cache-Control directive in the MIME header would let the client browser cache it for a long time without ever running your On Demand Process again (and thus, saving your valuable database cycles).
Application Processes can also be defined on the page-level, so if you wished to have the download link be constrained by the authorization scheme on a specific page, you could do this too.
Be careful how this is used. If you don't implement some form of browser caching, then a report which displays 500 images inline on a page will result in 500 requests to the APEX engine and database, per user per page view! Ouch! And then it's a matter of time before a DBA starts hunting for the person slamming their database and reports that "APEX is killing our database". There is an excellent explanation of cache headers here.
Once again - credits go to Joel Kallman. |
You need to get the HttpConfiguration instance from the GlobalConfiguration object and call the MapHttpAttributeRoutes() method from inside the RegisterArea method of the AreaRegistration.cs.
public override void RegisterArea(AreaRegistrationContext context)
{
GlobalConfiguration.Configuration.MapHttpAttributeRoutes();
//... omitted code
}
This must be done for each Area.
Finally, you must remove the config.MapHttpAttributeRoutes() call in WebApiConfig or you will get a duplicate route exception.
public static class WebApiConfig
{
public static void Register(HttpConfiguration config)
{
// Web API configuration and services
config.EnableCors();
// Configure Web API to use only bearer token authentication.
config.SuppressDefaultHostAuthentication();
config.Filters.Add(new HostAuthenticationFilter(OAuthDefaults.AuthenticationType));
// Web API routes
//config.MapHttpAttributeRoutes();
config.Routes.MapHttpRoute(
name: "DefaultApi",
routeTemplate: "api/{controller}/{id}",
defaults: new { id = RouteParameter.Optional }
);
}
}
|
The generated token will most likely be a JWT (Get Started with JSON Web Tokens), which means it's a self-contained token that is signed with a secret/key that only the server or other trusted parties know.
JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed.
(emphasis is mine)
This means that when receiving the token the server can ensure that:
the token was originally issued by a trusted party by checking that the signature is valid.
the token is associated with a user that has permissions to perform the following request, because the token itself contains information that uniquely identifies that user.
This type of approach has the side-benefit that the server does not need to keep track or store the generated tokens in order to validate them at a later time. Since no one else has the secret/key you can't modify the token without making the signature component invalid, which would then mean a faked token would end up being rejected by the server.
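As a small illustration of that validation step, here is a sketch using the PyJWT library; the secret and claims are placeholders of my own.
import jwt  # PyJWT -- illustrative only; any JWT library works the same way

SECRET = "server-side-signing-secret"  # known only to the issuing server

# Issued at login time: the token itself carries the user's identity (claims).
token = jwt.encode({"sub": "user-123", "role": "admin"}, SECRET, algorithm="HS256")

# On a later request the server only needs the secret to verify the signature;
# it does not have to look the token up in any store.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["sub"])  # -> user-123

# Any modification of the payload or signature makes jwt.decode() raise an
# InvalidSignatureError, so a forged token is rejected.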
This is a simplified description of what happens; there are many more details around how to issue and validate tokens correctly. You should read the OAuth2 and OpenID Connect specifications to learn more on the subject of token-based authentication.
Also note that I assumed a JWT token because it's the format that currently has the most widespread adoption to accomplish scenarios like these ones and it's also the token format to use in conjunction with OAuth2 and OpenID Connect. However, it's still possible to achieve the same with other token formats. |
Most card emulation techniques (proxmark, NFC phone, UID-changeable cards, etc.) do not provide perfect emulators.
First, there are obvious differences: some transceivers cannot emulate all UIDs, ATQA or SAK (as noted in other answer); but there are also many protocol issues with them while handling errors, or when you go slightly out of spec.
Some things I noticed while working on a transceiver driver, and testing with various cards:
some clone cards do not handle power up -> WUPA -> SEL (full selection command, with CRC) sequence correctly, they assume first request after WUPA is CL1 (short anticollision command), so they make a collision while it works with genuine cards,
some clone cards still answer to SEL if you do WUPA -> WUPA, while they should not w.r.t. the ISO14443-3 state machine (they should be stuck in IDLE state),
error handling is sometimes broken (in particular when Mifare authentication fails),
of course, UID-changeable "Chinese" cards actually answer to unlocking "magic" commands, genuine cards do not,
and at last, NXP introduced an Originality Check in its cards (they call it that way), it is marketed as a way to check card is genuine (I never used it, documentation is not public, so I can't comment those claims), and NXP guarantees a given UID is not issued twice.
With all these, you can probably detect and reject all current clones and emulator implementations, but you cannot guarantee nobody will ever create a perfect one.
If you truly rely on un-clonable cards, Mifare Classic is probably not the relevant technology, as all "security" features have been reverse engineered. Today, Mifare Classic should be considered as a cleartext-equivalent copiable memory. |
{"error":"invalid_client","message":"Client authentication failed"}
means that the client_id and client_secret you're currently sending fall into one of the following cases:
the client_id and client_secret being sent are not registered in your OAuth server
the client_id and client_secret being sent are not of the grant type that you specify in the POST body
For me, the password_client column value for that particular client
was 0. i manually changed it to 1 but it didn't help. – Kanav Oct 4 at
3:25
As I read your comment on one of the answers, it is obvious that you used the wrong type of client_id and client_secret for your password grant flow, so your case is the latter. You can verify that by finding the entry in the oauth_clients table for your supplied client_id and client_secret:
"grant_type": "password,"
"client_id": "3,"
"client_secret": "8BUPCSyYEdsgtZFnD6bFG6eg7MKuuKJHLsdW0k6g,"
SELECT * FROM oauth_clients WHERE id = 3 AND secret = '8BUPCSyYEdsgtZFnD6bFG6eg7MKuuKJHLsdW0k6g' AND password_client=1;
Run that against your oauth_clients table to check whether your supplied client_id and client_secret exist for the password grant flow in your OAuth server. Also, you can't change the type of a client by simply switching the value of that column; you have to create a new client for the password grant type.
php artisan passport:install
This command runs the Passport migrations and generates two default clients, one for each client type that laravel/passport supports out of the box (a personal access client and a password grant client). You can also create a client for the password grant flow on its own with
php artisan passport:client --password
When you have a new client for password grant type, try POSTing again to /oauth/token to generate a token with your new generated client_id and client_secret |
The answer by Nir Levy is correct but I'll try to give you a little more context on what's going on.
var express = require('express');
// Import the meter export and assign it a variable for reuse.
var meter = require('./meter');
var app = express();
app.listen(3000);
console.log('listening on 3000 port')
As per Nir's answer, you'll just use meter.hi for handling get requests to /
app.get('/', meter.hi);
What exactly is happening here is that JavaScript passes all the arguments to the meter.hi method. In the case of Express there will be 3 arguments here - request, response and next, passed in that order.
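For reference, a hypothetical meter.js matching that usage could be as small as this (the response text is a placeholder):
// meter.js -- export plain (req, res) handlers that Express can call directly.
module.exports.hi = function (req, res) {
  res.send('hi from meter');
};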
In the meter module you are just using request and response alone, which is fine, but if there's any other processing required, or the arguments for meter.hi need to vary, you might want to follow the practice shown below.
app.get('/', function( req, res ) {
// You can process the request here.
// Eg. authentication
meter.hi(req, res);
});
This way you have more control over the arguments being passed to your modules. |
The techniques "eval()" and "JSON.parse()" use mutually exclusive formats.
With "eval()" parenthesis are required.
With "JSON.parse()" parenthesis are forbidden.
Beware, there are "stringify()" functions that produce "eval" format. For ajax, you should use only the JSON format.
While "eval" incorporates the entire JavaScript language, JSON uses only a tiny subset of the language. Among the constructs in the JavaScript language that "eval" must recognize is the "Block statement" (a.k.a. "compound statement"); which is a pair or curly braces "{}" with some statements inside. But curly braces are also used in the syntax of object literals. The interpretation is differentiated by the context in which the code appears. Something might look like an object literal to you, but "eval" will see it as a compound statement.
In the JavaScript language, object literals occur to the right of an assignment.
var myObj = { ...some..code..here... };
Object literals don't occur on their own.
{ ...some..code..here... } // this looks like a compound statement
Going back to the OP's original question, asked in 2008, he inquired why the following fails in "eval()":
{ title: "One", key: "1" }
The answer is that it looks like a compound statement. To convert it into an object, you must put it into a context where a compound statement is impossible. That is done by putting parentheses around it:
( { title: "One", key: "1" } ) // not a compound statement, so must be an object literal
The OP also asked why a similar statement did successfully eval:
[ { title: "One", key: "1" }, { title: "Two", key: "2" } ]
The same answer applies -- the curly braces are in a context where a compound statement is impossible. This is an array context, "[...]", and arrays can contain objects, but they cannot contain statements.
Unlike "eval()", JSON is very limited in its capabilities. The limitation is intentional. The designer of JSON intended a minimalist subset of JavaScript, using only syntax that could appear on the right hand side of an assignment. So if you have some code that correctly parses in JSON...
var myVar = JSON.parse("...some...code...here...");
...that implies it will also legally parse on the right hand side of an assignment, like this..
var myVar = ...some..code..here... ;
But that is not the only restriction on JSON. The BNF language specification for JSON is very simple. For example, it does not allow for the use of single quotes to indicate strings (like JavaScript and Perl do) and it does not have a way to express a single character as a byte (like 'C' does). Unfortunately, it also does not allow comments (which would be really nice when creating configuration files). The upside of all those limitations is that parsing JSON is fast and offers no opportunity for code injection (a security threat).
Because of these limitations, JSON has no use for parentheses. Consequently, a parenthesis in a JSON string (outside of a quoted string value) is an illegal character.
Always use JSON format with ajax, for the following reasons:
A typical ajax pipeline will be configured for JSON.
The use of "eval()" will be criticised as a security risk.
As an example of an ajax pipeline, consider a program that involves a Node server and a jQuery client. The client program uses a jQuery call having the form $.ajax({dataType:'json',...etc.});. JQuery creates a jqXHR object for later use, then packages and sends the associated request. The server accepts the request, processes it, and then is ready to respond. The server program will call the method res.json(data) to package and send the response. Back at the client side, jQuery accepts the response, consults the associated jqXHR object, and processes the JSON formatted data. This all works without any need for manual data conversion. The response involves no explicit call to JSON.stringify() on the Node server, and no explicit call to JSON.parse() on the client; that's all handled for you.
The use of "eval" is associated with code injection security risks. You might think there is no way that can happen, but hackers can get quite creative. Also, "eval" is problematic for Javascript optimization.
If you do find yourself using a "stringify()" function, be aware that some functions with that name will create strings that are compatible with "eval" and not with JSON. For example, in Node, the following gives you a function that creates strings in "eval"-compatible format:
var stringify = require('node-stringify'); // generates eval() format
This can be useful, but unless you have a specific need, it's probably not what you want. |
I modified the code in this answer to insert an <authentication/> element into all SOAP body requests:
@Override
public boolean handleRequest(MessageContext messageContext) throws WebServiceClientException {
logger.trace("Enter handleMessage");
try {
SaajSoapMessage request = (SaajSoapMessage) messageContext.getRequest();
addAuthn(request);
} catch (Exception e) {
logger.error(e.getMessage(),e);
}
return true;
}
protected void addAuthn(SaajSoapMessage request) throws TransformerException {
Transformer identityTransform = TransformerFactory.newInstance().newTransformer();
DOMResult domResult = new DOMResult();
identityTransform.transform(request.getPayloadSource(), domResult);
Node bodyContent = domResult.getNode();
Document doc = (Document) bodyContent;
doc.getFirstChild().appendChild(authNode(doc));
identityTransform.transform(new DOMSource(bodyContent), request.getPayloadResult());
}
protected Node authNode(Document doc) {
Element authentication = doc.createElementNS(ns, "authentication");
Element username = doc.createElementNS(ns, "username");
username.setTextContent(authn.getUsername());
Element password = doc.createElementNS(ns, "password");
password.setTextContent(authn.getPassword());
authentication.appendChild(username);
authentication.appendChild(password);
return authentication;
}
This solution was used because the WebServiceMessageCallback would require me to change the Document, and the SaajSoapMessageFactory is activated before the soap body has been inserted by the configured Jaxb2Marshaller. |
Did you read the Heroku guide to OAuth? It's pretty helpful. The flow would be:-
Salesforce app issues a redirect to your Heroku - GET https://id.heroku.com/oauth/authorize?client_id={client-id}&response_type=code&scope={scopes}&state={anti-forgery-token}
After user has authorized access there is a callback to your Salesforce app with an exchange token
Your Salesforce app then needs to exchange the token for an access token with your Heroku app, with the relevant scopes to access the data
I'm not sure if this is what you want though, since the whole point of OAuth is not authentication but authorization, i.e. the OAuth flow is not designed to identify the user, but to enable your client (Salesforce in this case) to access the user's resources held by the provider (your Heroku app in this case).
Since you want Authentication, not Authorization, there are a couple of approaches you could take depending on how much work you want to put in vs how secure it needs to be (you have to make a call on this).
Quick and dirty but not very secure
You could just check the referrer header on Heroku and, if the client is anything other than your Salesforce app, return a 403 Forbidden or 401 Unauthorized. It's not very reliable, since the referrer header can be missing or spoofed, but it's quick and straightforward if you do not have a great understanding of authentication and just want something basic.
Send a client ID with each request
This could be a Header or be in the body of the request. For it to be secure though you will need to encrypt it since you say you do not want to use SSL/TLS. So you will need to encrypt/decrypt the client ID at each end.
A basic approach is to just use some symmetric key that you share between your client (Salesforce) and provider (Heroku) which you store securely within each app somewhere so that hackers cannot read it. You also share (and securely store) some ID string (ideally some long random hash).
The flow would go like this:-
Salesforce app takes the random ID string and uses the symmetric key to encrypt it. This is what you send in the request to your Heroku app.
Heroku app - on receiving an incoming request - reads the encrypted value. It then uses the symmetric key to decrypt it. Your Heroku app then compares the decrypted value passed in the request and the random ID string (it also has stored locally) and if they are the same you have some degree of confidence that the source of the request was your Salesforce app. If not you deny the request.
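For illustration, here is a sketch of the provider-side check, assuming the Heroku app is Node/Express; the header name, environment variable names and algorithm choice are all illustrative, and as noted this is only a basic gate:
var crypto = require('crypto');

// Shared secrets, stored securely on both ends.
var SHARED_KEY = Buffer.from(process.env.SHARED_KEY_HEX, 'hex'); // 32 bytes for aes-256-cbc
var SHARED_ID  = process.env.SHARED_ID;                          // the long random ID string

// Express middleware: reject any request whose X-Client-Token header
// does not decrypt (iv:ciphertext, both hex-encoded) to the shared ID string.
function requireKnownClient(req, res, next) {
    try {
        var parts = req.get('X-Client-Token').split(':');
        var decipher = crypto.createDecipheriv('aes-256-cbc', SHARED_KEY, Buffer.from(parts[0], 'hex'));
        var decrypted = decipher.update(parts[1], 'hex', 'utf8') + decipher.final('utf8');
        if (decrypted === SHARED_ID) return next();
    } catch (e) {
        // malformed or missing header - fall through to the rejection below
    }
    res.sendStatus(403);
}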
Authentication is a big subject, as is encryption. If you really need to protect the data and there is a risk of you being sued if you do not, then you need to do some more research. If the data is not sensitive (or particularly valuable to anyone else) and you are just trying to have some basic front gate which deters other applications from exerting a load on your application, then you could consider just checking the referrer as a first attempt. |
Based on my understanding, the id_token is used for the client to verify the current user info. To check the specific permission for the resource, we normally use the access_token.
To verify an id_token issued by Azure AD (which integrates using the OpenID Connect protocol), we can follow the steps below (refer to the OpenID Connect specification):
If the ID Token is encrypted, decrypt it using the keys and algorithms that the Client specified during Registration that the OP was to use to encrypt the ID Token. If encryption was negotiated with the OP at Registration time and the ID Token is not encrypted, the RP SHOULD reject it.
The Issuer Identifier for the OpenID Provider (which is typically obtained during Discovery) MUST exactly match the value of the iss (issuer) Claim.
The Client MUST validate that the aud (audience) Claim contains its client_id value registered at the Issuer identified by the iss (issuer) Claim as an audience. The aud (audience) Claim MAY contain an array with more than one element. The ID Token MUST be rejected if the ID Token does not list the Client as a valid audience, or if it contains additional audiences not trusted by the Client.
If the ID Token contains multiple audiences, the Client SHOULD verify that an azp Claim is present.
If an azp (authorized party) Claim is present, the Client SHOULD verify that its client_id is the Claim Value.
If the ID Token is received via direct communication between the Client and the Token Endpoint (which it is in this flow), the TLS server validation MAY be used to validate the issuer in place of checking the token signature. The Client MUST validate the signature of all other ID Tokens according to JWS [JWS] using the algorithm specified in the JWT alg Header Parameter. The Client MUST use the keys provided by the Issuer.
The alg value SHOULD be the default of RS256 or the algorithm sent by the Client in the id_token_signed_response_alg parameter during Registration.
If the JWT alg Header Parameter uses a MAC based algorithm such as HS256, HS384, or HS512, the octets of the UTF-8 representation of the client_secret corresponding to the client_id contained in the aud (audience) Claim are used as the key to validate the signature. For MAC based algorithms, the behavior is unspecified if the aud is multi-valued or if an azp value is present that is different than the aud value.
The current time MUST be before the time represented by the exp Claim.
The iat Claim can be used to reject tokens that were issued too far away from the current time, limiting the amount of time that nonces need to be stored to prevent attacks. The acceptable range is Client specific.
If a nonce value was sent in the Authentication Request, a nonce Claim MUST be present and its value checked to verify that it is the same value as the one that was sent in the Authentication Request. The Client SHOULD check the nonce value for replay attacks. The precise method for detecting replay attacks is Client specific.
If the acr Claim was requested, the Client SHOULD check that the asserted Claim Value is appropriate. The meaning and processing of acr Claim Values is out of scope for this specification.
If the auth_time Claim was requested, either through a specific request for this Claim or by using the max_age parameter, the Client SHOULD check the auth_time Claim value and request re-authentication if it determines too much time has elapsed since the last End-User authentication.
And you can acquire the signing key data necessary to validate the signature by using the OpenID Connect metadata document located at one of the addresses below, depending on the endpoint you are developing against:
https://login.microsoftonline.com/common/.well-known/openid-configuration
https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration |
So let me get this right; you don't just want to provide the source code for the website, but also to prove to any visitors that the site is in fact hosted and running on the provided code?
You might be able to make something that seems fairly convincing (exposing the inner files in your system directly through a browser?), but I'm not sure you can ever prove your claim formally.
Think about it: Even if you expose the inner guts of your application via the browser, you're still controlling everything that is being shown, so how can you possibly convince a visitor that what they are seeing is not just a fake copy that appears similar to your publicly facing site?
In any case, why would you want to do this? Yes, I know you asked this as a hypothetical question, but let's consider the practical consequences anyway: Assuming your goal is to make your users trust you and your service, you can probably achieve this more easily by following a few simple rules.
Provide the source code for your system in an open repository (e.g. Github), and provide information about this.
Provide other relevant information openly, and behave professionally and with integrity.
At some point, it comes down to basic trust, no matter what you do. If you behave openly and with integrity, and go to some length to show that you take security seriously (*see below), people are generally likely to trust you. Why? Because people are generally trusting of others so long as they cannot see any incentive for you to be fooling them (whether this is wise is another discussion).
(*) Sidenote: Please do and show that you take security seriously, don't just claim to do it!
How? Apply best practices for security (updated certificates and encryption, safe storage and handling of sensitive data, etc), and document this openly. A good description of reasonable security measures taken will indicate that you know what you are doing, and make it seem less likely that you're faking it. |
You can use ExtendedXmlSerializer.
If you have a class with a property that needs to be encrypted:
public class Person
{
public string Name { get; set; }
public string Password { get; set; }
}
You must implement the IPropertyEncryption interface. The example below uses Base64 encoding for brevity, but in the real world it is better to use something safer, e.g. RSA:
public class Base64PropertyEncryption : IPropertyEncryption
{
public string Encrypt(string value)
{
return Convert.ToBase64String(Encoding.UTF8.GetBytes(value));
}
public string Decrypt(string value)
{
return Encoding.UTF8.GetString(Convert.FromBase64String(value));
}
}
In the Person class configuration you need to specify which properties are to be encrypted:
public class PersonConfig : ExtendedXmlSerializerConfig<Person>
{
public PersonConfig()
{
Encrypt(p => p.Password);
}
}
Then, you must register your PersonConfig class and your implementation of IPropertyEncryption. The documentation describes configuration using Autofac. Here is a simple configuration:
var toolsFactory = new SimpleSerializationToolsFactory();
// Register your config class
toolsFactory.Configurations.Add(new PersonConfig());
// If you want to use property encryption you must register your implementation of IPropertyEncryption, e.g.:
toolsFactory.EncryptionAlgorithm = new Base64PropertyEncryption();
ExtendedXmlSerializer serializer = new ExtendedXmlSerializer(toolsFactory);
Then you can serialize object:
var obj = new Person {Name = "John", Password = "Ab238ds2"};
var xml = serializer.Serialize(obj);
Your xml will look like:
<?xml version="1.0" encoding="utf-8"?>
<Person type="ExtendedXmlSerialization.Samples.Encrypt.Person">
<Name>John</Name>
<Password>QWIyMzhkczI=</Password>
</Person>
ExtendedXmlSerializer has many other useful features:
Deserialization of xml from the standard XMLSerializer
Serialization of a class with an interface property
Serialization of circular references and reference Id
Deserialization of an old version of xml
Property encryption
Custom serializer
ExtendedXmlSerializer supports .net 4.5 and .net Core. You can integrate it with WebApi and AspCore. |
In one way, it's really trivial. In another way, it's very hard.
Here is how it is trivial:
There is a merge. The merge is either done right, or it's done wrong. Look at the merge and see whether it was right or wrong. If wrong, the author and committer tell you who did that (to the extent that you trust author and committer1).
Here is how it is very hard:
There is a merge. Is it right?
There is no automatic way to find the answer to that question.
If you have good automated tests, you can get arbitrarily close to automatically answering the question. Good automated tests are not easy, and become very hard if you try to make them very good.
If you don't have automated tests, but do have some people who are good at doing merges correctly, you can perhaps do spot-checks: simply have these good folks repeat the merge themselves, and compare their result to the earlier result. If it is the same, the original merge was done correctly. If it is different, it was not.
The presence or absence of merge conflicts is not a sure indicator as to whether a merge was done correctly. Git has no secret knowledge: it merely combines text according to a fixed set of rules. That said, if you wish to pick out, for manual testing, specifically those merges that did see conflicts, you can automate this part. Just repeat the merge (making sure git rerere is not turned on) and see if there are conflicts.
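If you want to verify that quickly before repeating a merge, the relevant setting can be inspected or overridden with git config (the commit placeholder below is illustrative):
git config --get rerere.enabled                    # no output or "false" means rerere is off
git -c rerere.enabled=false merge <other-parent>   # or disable it just for this one test merge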
Repeating a merge
Above, the phrase "repeat the merge" appears twice. You may wonder how you can repeat a merge. This is mostly trivial.2 First, find each merge's commit ID:
git rev-list --merges <start-point> [additional options if desired]
This produces a list of candidate IDs. (You will want to save them all somewhere, and annotate which ones you have checked so that you do not have to re-check them later. Use whatever ad hoc solution you like here.)
Then, given a merge ID, simply check out its first parent as a detached HEAD, and run git merge on its remaining parent IDs (this bit relies on POSIX shell behavior and syntax):
id=c79a113... # or whatever, from your list
set -- $(git rev-parse ${id}^@)
git checkout $1 # check out first parent
shift # now that we have $1 out, discard $1
git merge $* # (attempt to) merge the remaining commits
The rev^@ notation is from gitrevisions and means "all parents of the given revision". If all your merges are standard two-parent merges, you can use the clearer (but less general):
git checkout ${id}^1
git merge ${id}^2
(the gitrevisions syntax and shell set and shift technique allows for octopus merges).
Since this merge is being made on a detached HEAD, the resulting merge, if it completes automatically, can be trivially abandoned by checking out any other commit or branch-name. If the git merge fails with a conflict, you have a conflicted merge, with the usual results in the index; you can go on to finish the merge, or use git merge --abort to terminate it. (The exit status from git merge will be 0 if the merge succeeds and nonzero if not.)
1Remember that, except for commit transfers through push or fetch, every Git operation is done locally by someone in complete control of his or her own repository. The user can of course lie and claim to be someone else. If you control this at all, you can and must do so at the transfer boundary: when you pick up a commit from someone else—whether by git fetch from your end, or git push from their end—you know who you are talking to (through whatever authentication you have put in place: this is entirely outside Git, and is usually from ssh or https these days, often wrapped up in third party extensions like Gitolite, or provided as a service by something like GitHub). You have the chance, at this time, to do whatever verification you like, before accepting the commits.
Alternatively, you can check "after the fact" using, e.g., PGP signatures. If someone has PGP-signed some tags and/or some commits, and you can verify these signatures, you can choose to believe that those tags and/or commits are theirs. You can then extend that trust (to whatever extent you are willing, of course, based on the security of SHA-1 and how much you trust the signer) to those commits' and/or tags' ancestors, since building an object that matches a predefined SHA-1 and its text is at least very hard. (This is called second pre-image resistance; see, e.g., this crypto.stackexchange.com question.)
2I say "mostly trivial" because options the user supplied to git merge are not available here. You could require users who use non-standard merge options to record them in their commits, or in notes attached to the commits, but this is difficult to enforce. See also footnote 1: you can only apply enforcement rules at the transfer boundary. |
On Mac, the Selenium SafariDriver extension you used to have to install is now deprecated. On El Capitan & Sierra, Apple provides its own safaridriver. Uninstall the previous SafariDriver extension (if you had installed it) and enable the new safaridriver; excerpt from link 2:
Ensure that the Develop menu is available. It can be turned on by opening Safari preferences (Safari > Preferences in the menu bar), going to the Advanced tab, and ensuring that the Show Develop menu in menu bar checkbox is checked.
Enable Remote Automation in the Develop menu. This is toggled via
Develop > Allow Remote Automation in the menu bar.
Authorize safaridriver to launch the webdriverd service which hosts
the local web server. To permit this, run /usr/bin/safaridriver once
manually and complete the authentication prompt. e.g. in terminal: /usr/bin/safaridriver -p 8000
Also, You need to be running Selenium 3.0 + (support started at 3.0.0-beta1) to use the new safari driver.
Note:
If you still have trouble, maybe check the Addendum at the bottom of the 2nd link. Another caveat I ran into: the new safaridriver only supports one session, so maxSessions=# is no longer supported. Also, if you use npm's selenium-standalone install you can update the Selenium version like so.
selenium-standalone install --version=3.0.1 --baseURL=https://selenium-release.storage.googleapis.com
And then boot hubs and nodes with the --version=3.0.1 flag. |
By your comment I believe you are looking for NTFS permission and not the Reporting Services permissions. Try this:
'Run as' administrator IE and open your http://localhost/Reports (or whatever is the URL)
In the top right corner click on 'Site Setting' (http://localhost/Reports/Pages/Settings.aspx)
Check the 'Security' settings (http://localhost/Reports/Pages/Settings.aspx?SelectedSubTabId=SecurityLinkID)
After you set the permissions the way you need, relaunch the browser (without 'Run as' administrator) using your normal account and try it.
For more information about how to set the permissions read here: https://msdn.microsoft.com/en-us/library/ms156034(v=sql.110).aspx
To add a user or group to a system role
Start Report Manager (SSRS).
Click Site Settings.
Click Security.
Click New Role Assignment.
In Group or user name, enter a Windows domain user or group account
in this format: <domain>\<account>. If you are using forms
authentication or custom security, specify the user or group account
in the format that is correct for your deployment.
Select a system role, and then click OK. [Roles are cumulative, so if you select both System Administrator and System User, a user or
group will be able to perform the tasks in both roles.]
Repeat to create assignments for additional users or groups.
To add a user or group to an item role
Start Report Manager and locate the report item for which you want
to add a user or group.
Hover over the item, and click the drop-down arrow.
In the drop-down menu, click Security.
Click New Role Assignment.
Note If an item currently inherits security from a parent item, click Edit Item Security in the toolbar to change the security settings.
Then click New Role Assignment.
In Group or user name, enter a Windows domain user or group account
in this format: <domain>\<account>. If you are using forms
authentication or custom security, specify the user or group account
in the format that is correct for your deployment.
Select one or more role definitions that describe how the user or
group should access the item, and then click OK.
Repeat to create assignments for additional users or groups.
|
If you want to build it yourself then look at node.js and ffmpeg.
Now, how you capture the a.v. (audio/visual) is a different question, but ffmpeg can be used to push out a stream (over which protocol I'm not sure - I only had a quick browse and didn't see it easily). With ffmpeg you have a multitude of options; I've looked at it before, used it in a project or two, and it's extremely intuitive and well documented. If you're starting out, another library to look at is fluent-ffmpeg for node.js, as it puts an easier-to-use wrapper over the API.
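As a rough illustration of what fluent-ffmpeg looks like, here is a sketch that pushes a local file to an RTMP ingest point (the input file, codecs and RTMP URL/stream key are placeholders - the real endpoint and settings come from the service you are streaming to):
var ffmpeg = require('fluent-ffmpeg');

ffmpeg('capture.mp4')
  .inputOptions('-re')                      // read the input at its native frame rate
  .videoCodec('libx264')
  .audioCodec('aac')
  .format('flv')                            // RTMP endpoints generally expect FLV
  .output('rtmp://live.example.com/app/STREAM_KEY')
  .on('error', function (err) { console.error('Stream failed:', err.message); })
  .on('end', function () { console.log('Stream finished'); })
  .run();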
A pre-made solution would be node-rtsp-rtmp-server, but again, it works off of a file (like an mp4 in a folder), so you would have to alter it for your own purposes.
If you were to do this, first find some way to capture your a.v. and connect it to ffmpeg to save the file as well as transcode it to whatever file type you want, then allow this server to stream to your required destination.
If I were you, I'd have a good look at open-source repositories like GitHub first and see what's available.
Also, just to point out, this is just the way to stream (and a brief overview at that); there's also the whole authentication process with the social-media services and all the connection work that's needed between your server and the endpoint service. |
First, see my previous question/answer to leverage FormsAuth Tickets in Owin: OWIN Self-Host CookieAuthentication & Legacy .NET 4.0 Application / FormsAuthenticationTicket
Once you have the ability to decrypt/encrypt your FormsAuth cookie, you can leverage that in IdentityServer.
Since your hosting is most likely different than mine, use this as a reference:
/ -> our main api appBuilder
/auth -> our identityServer
Our main API appBuilder uses the cookie auth middleware as described in the associated SO post (link) above.
IdenityServer app composition root:
appBuilder.Map("/auth", idsrvApp =>
{
idsrvApp.Use((context, task) =>
{
// since we can authenticate using "Cookies" auth,
// we must add the principal to the env so we can reuse it in the UserService
// oddly, the Context.Authentication.User will clear by the time it gets there and we can't rely on it
// my best guess is because IdentityServer is not authenticated (no cookie set)
if (context.Authentication.User != null && context.Authentication.User.Identity.IsAuthenticated)
context.Environment.Add("auth.principal", context.Authentication.User);
return task.Invoke();
});
idsrvApp.UseIdentityServer(isOptions);
});
UserService.cs
public async Task PreAuthenticateAsync(PreAuthenticationContext context)
{
// if we already have an authenticated user/principal then bypass local authentication
if (_Context.Authentication.User.Identity.IsAuthenticated ||
_Context.Environment.ContainsKey("auth.principal"))
{
var principal = _Context.Authentication.User.Identity.IsAuthenticated
? _Context.Authentication.User
: (ClaimsPrincipal)_Context.Environment["auth.principal"];
context.AuthenticateResult =
new AuthenticateResult(); // set AuthenticateResult
return;
}
}
Please Note:
Use this as an example.
Enabling cookie auth on your app or API MAY unsuspectingly expose you to CSRF attacks. Ensure you're aware of this attack vector and take the necessary steps to reduce the risk.
|
However, SCrypt does not expose public methods for the data encryption itself, so would it make sense to pass the SCrypt hashed password to AES
SCrypt is a Key Derivation Function, so yes, that is an acceptable thing to do.
how to reliably randomize the IV?
Don't use the output of the KDF in the IV. The IV should be random for AES-CBC, so use RandomNumberGenerator.Create() to create a CSPRNG for the IV. Using the KDF output as part of the IV actually leaks the key since the IV is stored in plaintext.
An IV in AES-CBC should be random, and it should not be reused. Don't derive it from the password. You do need to store the IV somewhere. Since it looks like you're trying to encrypt files, you may just want to put the IV in at the beginning of the file. The IV is not a secret - it's OK if someone can read it. Then, when it comes time to decrypt the file, read the IV from the file, and then decrypt everything past the IV.
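For illustration, a minimal sketch of that layout in C# (using System.IO and System.Security.Cryptography; key is assumed to be the 32-byte key derived via SCrypt, and the file paths are placeholders):
// Generate a fresh, random IV for every encryption - never derive it from the password.
byte[] iv = new byte[16];
using (var rng = RandomNumberGenerator.Create())
    rng.GetBytes(iv);

using (var aes = Aes.Create())
using (var encryptor = aes.CreateEncryptor(key, iv))
using (var outStream = File.Create(outputPath))
{
    // The IV is not secret; write it at the start of the file so decryption can read it back.
    outStream.Write(iv, 0, iv.Length);

    using (var cryptoStream = new CryptoStream(outStream, encryptor, CryptoStreamMode.Write))
    using (var inStream = File.OpenRead(inputPath))
        inStream.CopyTo(cryptoStream);
}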
I would also recommend that you MAC the file, as well, as right now your application does not authenticate the encryption. |
Well, it seems like I found the root cause of my problem. This appears to be a known bug with System.Net.Http.HttpClient when using network authentication. See this article here
My initial mistake was that I wasn't catching the exceptions thrown by PostAsync. Once I wrapped that inside a try/catch block I got the following exception thrown:
“This IRandomAccessStream does not support the GetInputStreamAt method because it requires cloning and this stream does not support cloning.”
The first paragraph of the article I linked to above states:
When you use the System.Net.Http.HttpClient class from a .NET
framework based Universal Windows Platform (UWP) app and send a
HTTP(s) PUT or POST request to a URI which requires Integrated Windows
Authentication – such as Negotiate/NTLM, an exception will be thrown.
The thrown exception will have an InnerException property set to the
message:
“This IRandomAccessStream does not support the GetInputStreamAt method
because it requires cloning and this stream does not support cloning.”
The problem happens because the request as well as the entity body of
the POST/PUT request needs to be resubmitted during the authentication
challenge. The above problem does not happen for HTTP verbs such as
GET which do not require an entity body.
This is a known issue in the RTM release of the Windows 10 SDK and we
are tracking a fix for this issue for a subsequent release.
The recommended workaround that worked for me was to use Windows.Web.Http.HttpClient instead of System.Net.Http.HttpClient
Using that recommendation, the following code worked for me:
string filePath = "Data\\postbody.txt";
string url = "https://outlook.office365.com/EWS/Exchange.asmx";
Uri requestUri = new Uri(url); //replace your Url
string contents = await ReadFileContentsAsync(filePath);
string search_str = txtSearch.Text;
Debug.WriteLine("Search query:" + search_str);
contents = contents.Replace("%SEARCH%", search_str);
Windows.Web.Http.Filters.HttpBaseProtocolFilter hbpf = new Windows.Web.Http.Filters.HttpBaseProtocolFilter();
Windows.Security.Credentials.PasswordCredential pcred = new Windows.Security.Credentials.PasswordCredential(url, "[email protected]", "password");
hbpf.ServerCredential = pcred;
HttpClient request = new HttpClient(hbpf);
Windows.Web.Http.HttpRequestMessage hreqm = new Windows.Web.Http.HttpRequestMessage(Windows.Web.Http.HttpMethod.Post, new Uri(url));
Windows.Web.Http.HttpStringContent hstr = new Windows.Web.Http.HttpStringContent(contents, Windows.Storage.Streams.UnicodeEncoding.Utf8, "text/xml");
hreqm.Content = hstr;
// consume the HttpResponseMessage and the remainder of your code logic from here.
try
{
Windows.Web.Http.HttpResponseMessage hrespm = await request.SendRequestAsync(hreqm);
Debug.WriteLine(hrespm.Content);
String respcontent = await hrespm.Content.ReadAsStringAsync();
}
catch (Exception ex)
{
string e = ex.Message;
Debug.WriteLine(e);
}
Hopefully this is helpful to someone else hitting this issue. |
In case anybody stumbles on this, here is the solution
Go to pod, NXOAuth2Client.m and replace the method
- (void)requestTokenWithAuthGrant:(NSString *)authGrant redirectURL:(NSURL *)redirectURL; with the below code
- (void)requestTokenWithAuthGrant:(NSString *)authGrant redirectURL:(NSURL *)redirectURL;
{
NSAssert1(!authConnection, @"authConnection already running with: %@", authConnection);
NSMutableURLRequest *tokenRequest = [NSMutableURLRequest requestWithURL:tokenURL];
[tokenRequest setHTTPMethod:self.tokenRequestHTTPMethod];
[authConnection cancel]; // just to be sure
self.authenticating = YES;
NSMutableDictionary *parameters = [NSMutableDictionary dictionaryWithObjectsAndKeys:
@"authorization_code", @"grant_type",
clientId, @"client_id",
// clientSecret, @"client_secret",
[redirectURL absoluteString], @"redirect_uri",
authGrant, @"code",
nil];
if (self.desiredScope) {
[parameters setObject:[[self.desiredScope allObjects] componentsJoinedByString:@" "] forKey:@"scope"];
}
if (self.customHeaderFields) {
[self.customHeaderFields enumerateKeysAndObjectsUsingBlock:^(NSString *key, NSString *obj, BOOL *stop) {
[tokenRequest addValue:obj forHTTPHeaderField:key];
}];
}
if (self.additionalAuthenticationParameters) {
[parameters addEntriesFromDictionary:self.additionalAuthenticationParameters];
}
authConnection = [[NXOAuth2Connection alloc] initWithRequest:tokenRequest
requestParameters:parameters
oauthClient:self
delegate:self];
authConnection.context = NXOAuth2ClientConnectionContextTokenRequest;
}
Commenting out clientSecret solved the issue |
The Team Foundation Server itself has a setting called the Notification Uri; whenever anything asks where it can find stuff, TFS will use this Uri to send back the location.
In you case the build server wants to know all kinds of things, download source code, the build process template, upload test results etc. When asking where to grab these from or send these to, TFS will respond with that Notification Uri.
Your server is configured to use a self-signed SSL certificate, and the server is configured to send back the secure location through its Notification Uri property; thus your client needs to build a trust relationship to establish the communication.
There are three solutions:
install a trusted certificate on the TFS server (in case you're in an Active Directory setup, this may not be as hard as it seems).
install the self-signed certificate in the trusted root certificate store of each windows computer connecting to the TFS server
Turn off SSL on your TFS server by removing the cert from the IIS binding and reconfiguring the server and notification URI.
Note: disabling SSL may introduce holes in your security setup depending on how authentication is configured. If you server accepts basic auth, or when you upgrade to TFS2017 and activate support for Personal Access Tokens, your authentication token may be sent over the wire in clear text. |
Yes, you can access LocalStorage in your add-in. Indeed, your add-in is a website, and in the case of Outlook Desktop the underlying browser is IE. Watch out for Safari's private (incognito) mode, where localStorage is disabled.
RoamingSettings and LocalStorage are different and should be used for different purposes. RoamingSettings is a "per mail account" storage provided by Office.js. LocalStorage is a "per website" storage provided by the browser; precisely, for a given browser and the same domain you can access the values in LocalStorage.
For example, with RoamingSettings, for a given Microsoft mail account, you can reuse values between your add-in loaded in Office Desktop and in Outlook Online. Of course it can be used only in the context of an add-in.
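For illustration, a minimal sketch of the RoamingSettings API (the setting name and value are illustrative):
// Save a value in the mailbox's roaming settings...
Office.context.roamingSettings.set('myAddinState', 'some value');
Office.context.roamingSettings.saveAsync(function (result) {
    if (result.status === Office.AsyncResultStatus.Failed) {
        console.log('Save failed: ' + result.error.message);
    }
});

// ...and read it back later, from Outlook Desktop or Outlook Online, for the same account.
var saved = Office.context.roamingSettings.get('myAddinState');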
An example of usage of LocalStorage would be if you have a web application served from the same domain but which is not the add-in. Then, for the same browser, LocalStorage can be used to share things like an authentication token, etc. |
At the end you will see a catch statement, my question is will this catch statement work for all of the promises?
No, it only works for the outer promise, the one returned by the then call. This needs to be rejected for the catch callback to be activated. To get this promise rejected, either apiAccountStatus(…) must reject or the then callback must throw an exception or return a promise that will be rejected.
This last thing is what you were missing - you were creating more promises inside that then callback, but you weren't returning them, so they did not chain. You have to do
export function authenticationSignIn(email, password) {
return (dispatch) => {
dispatch({ type: AUTHENTICATION_REQUEST });
apiAccountStatus(email, password)
.then(({data: {status}}) => {
if (status === 'ACCOUNT_CREATED') {
return apiSignIn(email, password)
// ^^^^^^
.then(({ data: sessionData }) => {
return apiIndexAccounts()
// ^^^^^^
.then(({ data: accountsData }) => {
dispatch({ type: AUTHENTICATION_SUCCESS });
window.router.transitionTo('/dashboard/home');
});
});
} else if (status === 'SOMETHING ELSE') {
// TODO: HANDLE SOMETHING ELSE
}
})
.catch(({ response }) => {
dispatch({ type: AUTHENTICATION_FAILURE });
dispatch(notificationShow('ERROR', response.data.type));
});
};
}
|
I'm not an authority on OpenID Connect but here are my two cents...
Authorization Code Flow and nonce
Do I need to verify the nonce on the client side when using Authorization Code Flow?
The spec says that if you send a nonce in the authorization request then you MUST verify it (see "nonce" in http://openid.net/specs/openid-connect-core-1_0.html#IDToken). However, sending the nonce is not required for the authorization code flow so you could leave it out altogether. In the authorization code flow case, I think you're right in that the replay attack is mitigated by the code--making the nonce unnecessary. However, since one could be using an implicit/hybrid flow where the nonce is required, the id_token validation logic might as well be the same in that "If a nonce value was sent in the Authentication Request, a nonce Claim MUST be present and its value checked"
Authorization Code Flow and ID Token
What are the benefits of using Authorization Code Flow in web applications?
I think the benefit of authorization code flow is that you keep the tokens out of the browser and can likely keep the tokens only on the server side.
Here's a helpful link about choosing the right flow for the right scenario |
You are going to want to set up routes in the Angular app to handle the front end of your application, then create a service to handle the Auth0 authentication of the application.
This is an overview of setting up a secure set of routes and a public set of routes in your app. Once someone logs in with OAuth they will be forwarded to the secure routes.
So, starting out, here are the routes. We will specify a secure and a public set in the app.routing.ts file
Routes
const APP_ROUTES: Routes = [
{ path: '', redirectTo: '/home', pathMatch: 'full', },
{ path: '', component: PublicComponent, data: { title: 'Public Views' }, children: PUBLIC_ROUTES },
{ path: '', component: SecureComponent, canActivate: [Guard], data: { title: 'Secure Views' }, children: SECURE_ROUTES }
];
Ok, so now that you have that, you can create a templates directory. Inside, create secure.component and public.component. Then I create a directory called secure and one called public, in which I put all of my components according to the authentication level required to access them. I also add their routes to a file in those directories to keep everything separate.
Notice in my routes above I have the [Guard] set up on the secure routes. This will block anyone from going to the secure routes without authentication.
Here is an example of what that guard looks like.
import { Injectable } from '@angular/core';
import { CanActivate, Router, ActivatedRouteSnapshot, RouterStateSnapshot } from '@angular/router';
import { Auth } from './auth.service';
import { Observable } from 'rxjs/Observable';
@Injectable()
export class Guard implements CanActivate {
constructor(protected router: Router, protected auth: Auth ) {}
canActivate() {
if (localStorage.getItem('access_token')) {
// logged in so return true
return true;
}
// not logged in so redirect to login page
this.router.navigate(['/home']);
return false;
}
}
Now that we have the routes secured with the Guard, we can set up the Auth0 client.
Create a config file with the credentials you get from Auth0:
interface AuthConfiguration {
clientID: string,
domain: string,
callbackURL: string
}
export const myConfig: AuthConfiguration = {
clientID: 'clietnifherefromauth0',
domain: 'username.auth0.com',
// You may need to change this!
callbackURL: 'http://localhost:3000/endpoint/'
};
Then, to actually authenticate someone: receive their data and save the token, as well as their profile, to local storage. Also provide a logout function and a check to make sure they are logged in.
import { Injectable } from '@angular/core';
import { tokenNotExpired, JwtHelper } from 'angular2-jwt';
import { Router } from '@angular/router';
import { myConfig } from './auth.config';
declare var Auth0Lock: any;
var options = {
theme: {
logo: '/img/logo.png',
primaryColor: '#779476'
},
languageDictionary: {
emailInputPlaceholder: "[email protected]",
title: "Login or SignUp"
},
};
@Injectable()
export class Auth {
lock = new Auth0Lock(myConfig.clientID, myConfig.domain, options, {});
userProfile: Object;
constructor(private router: Router) {
this.userProfile = JSON.parse(localStorage.getItem('profile'));
this.lock.on('authenticated', (authResult: any) => {
localStorage.setItem('access_token', authResult.idToken);
this.lock.getProfile(authResult.idToken, (error: any, profile: any) => {
if (error) {
console.log(error);
return;
}
localStorage.setItem('profile', JSON.stringify(profile));
this.userProfile = profile;
this.router.navigateByUrl('/CHANGETHISTOYOURROUTE');
});
this.lock.hide();
});
}
public login() {
this.lock.show();
}
private get accessToken(): string {
return localStorage.getItem('access_token');
}
public authenticated(): boolean {
try {
var jwtHelper: JwtHelper = new JwtHelper();
var token = this.accessToken;
if (jwtHelper.isTokenExpired(token))
return false;
return true;
}
catch (err) {
return false;
}
}
public logout() {
localStorage.removeItem('profile');
localStorage.removeItem('access_token');
this.userProfile = undefined;
this.router.navigateByUrl('/home');
};
}
Make sure to go into your Auth0 dashboard and select the social connections you want - in your case Facebook, Twitter and Google. Then, when someone activates the widget, those three will appear.
So all we have to do now is show the widget when someone clicks login.
The html will show a login link, but if they are logged in it will show a bit of information about them instead.
<ul class="nav navbar-nav pull-right">
<li class="nav-item">
<a class="nav-link" (click)="auth.login()" *ngIf="!auth.authenticated()">Login / SignUp</a>
<a class="aside-toggle" href="#" role="button" aria-haspopup="true" aria-expanded="false" *ngIf="auth.authenticated()">
<span *ngIf="auth.authenticated() && auth.userProfile" class="profile-name">{{auth.userProfile.nickname}}</span>
<span *ngIf="!auth.authenticated() && !auth.userProfile" class="profile-name">Account</span>
<i class="icon-bell"></i><span class="tag tag-pill tag-danger profile-alerts">5</span>
<img *ngIf="auth.authenticated() && auth.userProfile" [src]="auth.userProfile.picture" class="img-avatar profile-picture" alt="User profile picture">
<img *ngIf="!auth.authenticated() && !auth.userProfile" src="/img/avatars/gravatar-default.png" alt="Default profile-picture">
</a>
</li>
</ul>
Let me know if anything is not clear. I would be glad to help. |
@Charles Offenbacher's answer is great for impersonating users who are not being authenticated via tokens. However, it will not work with client-side apps that use token authentication. To get user impersonation to work with apps using tokens, one has to directly set the HTTP_AUTHORIZATION header in the impersonation middleware. My answer basically plagiarizes Charles's answer and adds lines for manually setting said header.
from django.contrib.auth.models import User
from rest_framework.authtoken.models import Token  # assuming Django REST framework token authentication


class ImpersonateMiddleware(object):
def process_request(self, request):
if request.user.is_superuser and "__impersonate" in request.GET:
request.session['impersonate_id'] = int(request.GET["__impersonate"])
elif "__unimpersonate" in request.GET:
del request.session['impersonate_id']
if request.user.is_superuser and 'impersonate_id' in request.session:
request.user = User.objects.get(id=request.session['impersonate_id'])
# retrieve user's token
token = Token.objects.get(user=request.user)
# manually set authorization header to user's token as it will be set to that of the admin's (assuming the admin has one, of course)
request.META['HTTP_AUTHORIZATION'] = 'Token {0}'.format(token.key)
|
If I understand correctly, you need the authentication to persist even when you close the tab and reopen it later.
Short answer: use window.localStorage. Create a token on a successful login attempt, save it in localStorage, and check for it each time the page loads.
Long-short answer: this does not involve hardcoding the user creds. Look up stateless authentication solutions like JWT or cookie-based approaches. These involve a server-side token/cookie being created, passed through your REST service headers and persisted in the local storage of the browser. I personally like JWT because it has less overhead.
Hope that helps!
Edit :- Updating that piece of code a bit. Might have typos. Try to use UI-Router instead of windows redirect.
app.controller('loginController',['$scope','$http', function($scope,$http){
//Check if localStorage contains authToken
(window.localStorage.getItem('authToken') == 'success')? 'redirect - to - your page ' : 'Go - to - login';
$scope.adminLogin = function() {
if($scope.username=="[email protected]" && $scope.password=="admin123"){
window.localStorage.setItem('authToken', 'success');
window.location = '/PricePredictionUI/#/DASHBOARD';
}else{
$scope.message="Error";
$scope.messagecolor="alert alert-danger";
}
};
}]);
|
Is there a way I can create a relationship between my org and others in Azure AD and somehow allow/deny authorization to my app using those relationships at the AAD login?
Is there a tenantid value that comes from Azure AD that I can compare to a list I keep in SQL server to allow/deny authorization after AAD login? Is that typically a domain name or some GUID or other value I have to get from the customer?
Yes. We are able to write custom code to verify the iss claim from the token so that it meets the business logic. Here is the code using the OpenID Connect OWIN component for your reference:
app.UseOpenIdConnectAuthentication(
new OpenIdConnectAuthenticationOptions
{
ClientId = ClientId,
Authority = Authority,
TokenValidationParameters = new System.IdentityModel.Tokens.TokenValidationParameters
{
// instead of using the default validation (validating against a single issuer value, as we do in line of business apps),
// we inject our own multitenant validation logic
ValidateIssuer = false,
},
Notifications = new OpenIdConnectAuthenticationNotifications()
{
// we use this notification for injecting our custom logic
SecurityTokenValidated = (context) =>
{
// retriever caller data from the incoming principal
string issuer = context.AuthenticationTicket.Identity.FindFirst("iss").Value;
string UPN = context.AuthenticationTicket.Identity.FindFirst(ClaimTypes.Name).Value;
string tenantID = context.AuthenticationTicket.Identity.FindFirst("http://schemas.microsoft.com/identity/claims/tenantid").Value;
if (
// the caller comes from an admin-consented, recorded issuer
(db.Tenants.FirstOrDefault(a => ((a.IssValue == issuer) && (a.AdminConsented))) == null)
// the caller is recorded in the db of users who went through the individual onboardoing
&& (db.Users.FirstOrDefault(b =>((b.UPN == UPN) && (b.TenantID == tenantID))) == null)
)
// the caller was neither from a trusted issuer or a registered user - throw to block the authentication flow
throw new SecurityTokenValidationException();
return Task.FromResult(0);
},
AuthenticationFailed = (context) =>
{
context.OwinContext.Response.Redirect("/Home/Error?message=" + context.Exception.Message);
context.HandleResponse(); // Suppress the exception
return Task.FromResult(0);
}
}
});
And here is a helpful code sample for your reference.
Have the customer register my app or somehow assign it to groups they set up in their own Azure AD so the user won't authenticate if not allowed by their admin? OR
Query the incoming user profile at my server for some value (Group Membership, Department, Manager, etc.) using Graph API and allow/deny based on that value?
Based on my understanding, you should let the customer company manage the users who have access to your application, because user management is the responsibility of the partner company. So after the partner company enables or disables users, your company and application don't require any additional work or changes.
And to manage the users who can access the application, the partner company can use the Requiring User Assignment feature. |
My answer is based on following assumptions:
You have kind of root-access to your server and you are able to enter your webserver's configuration.
You have rather low experience in web-based development (deduced from the fact, that you build your own CMS; allow_url_fopen seems a viable option to you, beside that allow_url_include is what you're actually seeking for; allow_url_include won't magically connect to the remote server and load the php-files like server-side; allow_url_include should definitely be disabled, due to security risks; allow_url_include could also greatly reduce the performance of your project; your idea of what is a webservice is derived from Google Maps API) - no intent to offend!
Let me address the issues with allow_url_fopen and allow_url_include:
allow_url_fopen and allow_url_include would require you to open up a potential security risk:
allow_url_* would require you to expose the source-code of your application via http. Based on my deduced assumption that you're rather a beginner, I infer the risk that a potential attacker would find a way to formulate some hack very easily. An unfair generalization, but this is almost always the case when I review source code of younger developers.
You will have a second, potentially hackable layer of your application, that could emit all sorts of php-code.
You have an additional chain-part, that could have downtime due to maintenance or hardware-/software-problems. This is also the case when making content accessible via a webservice (REST, etc).
allow_url_fopen and allow_url_include, or even a webservice could noticeably reduce the performance of your CMS, since all files, that are subjected to inclusion, would be streamed over network instead of the (usually much better connected) local storage.
Solution 1) The most common solution is to configure your domain's DNS A and AAAA records to all point to the same server IP, and to configure your webserver (e.g. apache, nginx/varnish) to direct all traffic to a single virtual host. Your CMS then has to deal with requests for different host names. You can deliver the appropriate content based on what's in the super-global $_SERVER['HTTP_HOST'] variable (see the sketch after the steps below). Beware that this variable could be named differently if your server environment is behind a reverse proxy (you would know this; it is not different per visitor).
Step1: DNS-Resolution: Your project-a.xyz points to the ip of your cms' server.
Step2: Your webserver knowns, that it has to do something with project-a.xyz
Step3: Your webserver directs the request to your CMS
Step4: Your CMS resolves the actually requested host from $_SERVER['HTTP_HOST']
Step5: Your CMS emits the content that belongs to the requested host-name
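A minimal front-controller sketch of Solution 1 (the host-to-project mapping and the loading logic are illustrative):
<?php
// Decide which project to render from the Host header.
// Behind a reverse proxy you may need X-Forwarded-Host instead.
$host = strtolower($_SERVER['HTTP_HOST']);

$projects = [
    'project-a.xyz' => 'project_a',
    'project-b.xyz' => 'project_b',
];

if (!isset($projects[$host])) {
    http_response_code(404);
    exit('Unknown host');
}

// Load the templates/content that belong to this project.
$currentProject = $projects[$host];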
Solution 2) If you want those projects to be physically separated, as your request suggests, you can take a look at various deployment systems that would enable you to push changes to multiple destinations. Your CMS should be built in a way that treats CMS core files (PHP files) and actual content differently and is able to update core files without affecting content. You don't have many options here. You could use SCM systems like Git or SVN to sync changes with remote projects easily, but this is rather discouraged.
Solution 3) You can indeed build some kind of webservice (REST is an often-used technique these days). The web project hosted on project-a.xyz would then be a rather simple thin client that mostly forwards requests to some REST endpoint. You would normally also want some kind of https-based authentication here. This would require your clients to be able to request content (not actual source code!) from another endpoint via HTTP (which is sometimes disabled on shared hosting environments), optionally apply some transformation to that content, and emit it. Since your request seems to imply that this is not the ideal option, you should really look into the first solution. |
To apply CSS to only one component you would have to use shadow DOM; however, AngularJS 1.5.x does not use shadow DOM. But given your specification you have 2 options:
1. Load css file in router or via custom directive.
As suggested in how-to-include-view-partial-specific-styling-in-angularjs, you can include your custom css files using either a custom directive or the angular-css module (more info about this module). I'd suggest using the second option (the angular-css module). Another example of how to use the angular-css module. However, you will have to use specific styles so they apply only to the /static/templates/authentication/login.html file.
2. Load css files using module local scope.
Another way is to break your application into specific ES6 modules (don't get this mixed up with Angular's modules) and include them in a main ES6 module. To understand ES6 modules follow this link. Then you can use local scope with css-loader (css-loader#local-scope). This option might be harder to get working, because you will have to change your build process and split the application into separate modules, but you get the option to apply stylesheets to only one module.
To be clear, the best solution might be the first one: no need to change your build process, you can just add one NG module and you are good to go, not to mention that you can then load CSS files for your directives or components. But as you stated, you need to apply your changes to only one particular .html file, so going for the second option might suit your needs better. |
Since the VM is deployed with Resource Manager, the state, IP address and size info live under different providers (Compute and Network). There is currently no way to get the VM info and the network info in a single call.
With the Microsoft Azure Management Client Library (Fluent), we can get the VM info (power state, machine size, IP address). Actually, it calls the REST API twice. For Azure authentication, please refer to how to create an authentication file.
AzureCredentials credentials = AzureCredentials.FromFile("Full path of your AzureAuthFile");
var azure = Azure
.Configure()
.WithLogLevel(HttpLoggingDelegatingHandler.Level.BASIC)
.Authenticate(credentials)
.WithDefaultSubscription();
foreach (var virtualMachine in azure.VirtualMachines.ListByGroup("Your Resource Group Name").Where(virtualMachine => virtualMachine.ComputerName.Equals("vmName")))
{
var state = virtualMachine.PowerState;
var size = virtualMachine.Size;
var ip = virtualMachine.GetPrimaryPublicIpAddress().IpAddress; //call Rest API again
}
If it is deployed under a Cloud Service, then we can use the Windows Azure management library. It is easy to get the VM (Role) info about power state, IP address, and machine size.
var certificate = new CertificateCloudCredentials(subscriptionId, x509Certificate);
var computeManagementClient = new ComputeManagementClient(certificate);
var deployments = await computeManagementClient.Deployments.GetByNameAsync (hostedServiceName,"Your Deployment Name");
var state = deployments.RoleInstances.First().PowerState;
var ipAddress = deployments.RoleInstances.First().IPAddress;
var size = deployments.RoleInstances.First().InstanceSize;
|
Here are examples using PHP and Python 3 to accomplish what you want. They're simple starting points for making requests over Tor and changing your identity on demand.
The PHP example uses TorUtils to communicate with the controller and wrap cURL through Tor.
The Python example uses stem to communicate with the controller and Requests for sending requests over Tor's SOCKS proxy.
The examples assume you have Tor working already and the SocksPort set to 9050, and the ControlPort set to 9051 with cookie authentication working, or a controller password of password.
PHP
Set Up
Install Composer to install the TorUtils package (you can also download the zipball and extract)
Once composer is working, run composer require dapphp/torutils from your project directory to download and install dependencies
Code
<?php
use Dapphp\TorUtils\ControlClient;
use Dapphp\TorUtils\TorCurlWrapper;
require_once 'vendor/autoload.php'; // composer autoloader
// include TorUtils/src/ControlClient.php and TorUtils/src/TorCurlWrapper.php if using without composer
$controller = new ControlClient; // get a new controller object
try {
$controller->connect('127.0.0.1', 9051); // connect to Tor controller on localhost:9051
$controller->authenticate('password'); // attempt to authenticate using "password" as password
} catch (\Exception $ex) {
die("Failed to open connection to Tor controller. Reason: " . $ex->getMessage() . "\n");
}
// issue 10 requests, changing identity after each request
for ($i = 0; $i < 10; ++$i) {
try {
$curl = new TorCurlWrapper('127.0.0.1', 9050); // connect to Tor SOCKS proxy on localhost:9050
$curl->httpGet('https://drew-phillips.com/ip-info/'); // issue request
$body = strip_tags($curl->getResponseBody());
if (preg_match('/Using Tor:\s*Yes/i', $body)) {
echo "You appear to be using Tor successfully. ";
} else {
echo "Proxy worked but this Tor IP is not known. ";
}
if (preg_match('/IP Address:\s*(\d+\.\d+\.\d+\.\d+)/i', $body, $ip)) {
echo "Source IP = {$ip[1]}\n";
} else {
echo "Couldn't determine IP!\n";
}
} catch (\Exception $ex) {
echo "HTTP request failed! " . $ex->getMessage() . "\n";
}
// TODO: issue more requests as needed here
echo "\n";
sleep(10);
try {
// send signal to controller to request new identity (IP)
$controller->signal(ControlClient::SIGNAL_NEWNYM);
} catch (\Exception $ex) {
echo "Failed to issue NEWNYM signal: " . $ex->getMessage() . "\n";
}
}
Python 3
Set Up
This example uses Python 3 and assumes you have the Python interpreter up and running and have the following packages installed: requests, requests[socks], socks, urllib3, stem.
On Debian/Ubuntu: sudo -H pip3 install requests requests[socks] socks urllib3 stem
Code
#!/usr/bin/env python3
import requests
import stem
import stem.connection
from stem import Signal
from stem.control import Controller
import time
import sys
import re
# specify Tor's SOCKS proxy for http and https requests
proxies = {
'http': 'socks5h://127.0.0.1:9050',
'https': 'socks5h://127.0.0.1:9050',
}
try:
controller = Controller.from_port(port=9051) # try to connect to controller at localhost:9051
except stem.SocketError as exc:
print("Unable to connect to tor on port 9051: %s" % exc)
sys.exit(1)
try:
controller.authenticate('password') # try to authenticate with password "password"
except stem.connection.PasswordAuthFailed:
print("Unable to authenticate, password is incorrect")
sys.exit(1)
# issue 10 requests, changing identity after each request
for i in range(10):
# issue request, passing proxies to request
r = requests.get('https://drew-phillips.com/ip-info/', proxies=proxies)
#print(r.text)
m = re.search('<dt>Using Tor:</dt><dd><span[^>]*>Yes', r.text)
if m:
print("You appear to be using Tor successfully. ", end="")
else:
print("Proxy worked but this Tor IP is not known. ", end="")
m = re.search(r'<dt>IP Address:</dt><dd>(\d+\.\d+\.\d+\.\d+)</dd>', r.text)
if m:
print("Source IP = %s" % m.groups(1))
else:
print("Failed to scrape IP from page")
try:
# send signal to controller to request new identity (IP)
controller.signal(Signal.NEWNYM)
except Exception as ex:
print("NEWNYM failed: %s" % ex)
time.sleep(10)
|
Okay, I've figured it out. If I use Auth0 to authenticate on the Angular side and then make an HTTP request to my Rails server, that HTTP request will have an Authorization header with a value like this:
Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJodHRwczovL2JlbmZyYW5rbGlubGFicy5hdXRoMC5jb20vIiwic3ViIjoiYXV0aDB8NTgzMDZmOTFjMDg4MTRlMDEwMTVmNDM0IiwiYXVkIjoiajNKdHpjYnNpTUkyR0JkRnZGb3FFTjM4cUtTVmI2Q0UiLCJleHAiOjE0Nzk4OTc3OTYsImlhdCI6MTQ3OTg2MTc5Nn0.2cGLY_e7jY0WL-ue4NeT39W4pdxJVSeOT5ZGd_xNmJk
The part after "Bearer", the part starting with "eyJ0", is a JSON Web Token. Henceforth I'll refer to the JSON Web Token simply as the "token".
When Rails receives the HTTP request, it can grab and then decode the token. In my case I'm using Knock.
Knock expects my User model to define a from_token_payload method. Here's what mine looks like:
class User < ApplicationRecord
def self.from_token_payload(payload)
User.find_by(auth0_id_string: payload['sub'])
end
end
My user table has an auth0_id_string column. If I manually create a user whose auth0_id_string matches what I find under sub in the decoded Auth0 token, then my from_token_payload method will find that user and Knock will give me a thumbs up for that token. If no user is found, thumbs down.
So it goes like this, roughly:
Angular asks Auth0 to authenticate a user
Auth0 sends back a JSON Web Token
Angular sends that JSON Web Token to Rails
Rails decodes that token
Rails tries to find a user that matches the data in that token
Rails sends back either a 200 or 401 depending on whether a matching user was found
There are some pieces missing but that's the gist of it. I'll probably end up writing a tutorial on Angular + Rails + Auth0 authentication since, as far as I've been able to tell, none currently exists. |
For a 24-bit image, if the width of the image is 682 then the rows need padding, because 682*3 is not a multiple of 4. Try changing the image width to 680 and try again.
To pad the image rows, use the following formula:
int pad = (4 - WIDTH % 4) % 4; // round WIDTH up to the next multiple of 4
WIDTH += pad;
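For reference, the padding can also be computed in bytes from the row stride. This is a minimal standalone sketch of that calculation; it reuses the WIDTH name from above purely for illustration and is not part of the question's code:
#include <stdio.h>

int main(void)
{
    int WIDTH = 682;                    /* pixel width of the image          */
    int row_bytes = WIDTH * 3;          /* 3 bytes per pixel for 24-bit data */
    int stride = (row_bytes + 3) & ~3;  /* round the row up to a multiple of 4 */
    int pad = stride - row_bytes;       /* padding bytes at the end of each row */

    printf("row_bytes=%d stride=%d pad=%d\n", row_bytes, stride, pad);
    /* prints: row_bytes=2046 stride=2048 pad=2 */
    return 0;
}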
Change the condition to fb_j < HEIGHT - 1 - FILTER_HEIGHT and fb_i < WIDTH - 1 - FILTER_WIDTH to avoid buffer overflow.
The bitmap is scanned from top to bottom. It worked fine when I switched the dimensions as follows (but I loaded the bitmap differently):
//Pixel frame_buffer[WIDTH][HEIGHT];
//Pixel temp_buffer[WIDTH][HEIGHT];
Pixel frame_buffer[HEIGHT][WIDTH];
Pixel temp_buffer[HEIGHT][WIDTH];
...
for(int fb_j = 1; fb_j < HEIGHT - 1 - FILTER_HEIGHT; fb_j++) {
for(int fb_i = 1; fb_i < WIDTH - 1 - FILTER_WIDTH; fb_i++) {
float r = 0, g = 0, b = 0;
for(int ker_i = 0; ker_i < FILTER_WIDTH; ker_i++) {
for(int ker_j = 0; ker_j < FILTER_HEIGHT; ker_j++) {
r += ((float)(frame_buffer[fb_j + ker_j][fb_i + ker_i].r / 255.0) * emboss_kernel[ker_j][ker_i]);
g += ((float)(frame_buffer[fb_j + ker_j][fb_i + ker_i].g / 255.0) * emboss_kernel[ker_j][ker_i]);
b += ((float)(frame_buffer[fb_j + ker_j][fb_i + ker_i].b / 255.0) * emboss_kernel[ker_j][ker_i]);
}
}
if(r > 1.0) r = 1.0;
else if(r < 0) r = 0;
if(g > 1.0) g = 1.0;
else if(g < 0) g = 0;
if(b > 1.0) b = 1.0;
else if(b < 0) b = 0;
// Output buffer which will be rendered after convolution
temp_buffer[fb_j][fb_i].r = (GLubyte)(r*255.0);
temp_buffer[fb_j][fb_i].g = (GLubyte)(g*255.0);
temp_buffer[fb_j][fb_i].b = (GLubyte)(b*255.0);
}
}
Also try running a direct copy for testing. Example:
temp_buffer[fb_j][fb_i].r = frame_buffer[fb_j][fb_i].r;
temp_buffer[fb_j][fb_i].g = frame_buffer[fb_j][fb_i].g;
temp_buffer[fb_j][fb_i].b = frame_buffer[fb_j][fb_i].b;
|
With a native application the flow is similar to what you described for the web application.
The Auth0 Mobile + API architecture scenario describes what should happen when you need to authenticate a user for a mobile application and then later access an API on behalf of that user.
Summary
you will continue to use the authorization code grant;
if the authorization server in question supports it you should use the PKCE (Proof Key for Code Exchange by OAuth Public Clients) for added security;
you will need to select how you will receive the code in the native application; you can use a custom scheme com.myinstaapp:, a local web server with the http: scheme or a few other options; (see this answer on OAuth redirect URI for native application for other alternatives)
you exchange the code obtained by the native application with an access token in a similar way to what you would do for a web application; (except for the use of client secrets which are in general not useful for native applications as they would be easily leaked)
Additional Information
The flow described in the Auth0 scenario assumes that authentication will happen through an OpenID Connect compliant flow and in addition you'll get the access token as specified by OAuth2. I'm not overly familiar with Instagram so if they only support OAuth2 that part is of course not applicable. |
This is tough to handle because you want to dynamically fill in some key/value pairs in a hash that is an element of an array which is a parameter value.
Option 1: Build the hash outside of the resource declaration
$auth_settings = {
auth_type => 'Basic',
auth_name => "$name",
auth_user_file => '/somefile.pwd',
auth_require => 'valid-user',
}
$base_dir1 = {path => $document_root,
rewrites => [
{
comment => "rule1",
rewrite_base => "/",
rewrite_rule => ['^index\.html$ - [L]']
},
{
comment => "rule2",
rewrite_cond => ['%{REQUEST_FILENAME} !-f', '%{REQUEST_FILENAME} !-d'],
rewrite_rule => ['. /index.html [L]']
}
]
}
if $use_authentication {
$real_dir1 = merge($base_dir1, $auth_settings)
}
else {
$real_dir1 = $base_dir1
}
apache::vhost { "$name-non-ssl":
servername => $vhost_name,
docroot => $document_root,
port => 80,
access_log_file => 'access.log',
access_log_format => 'vhost_common',
error_log_file => 'error.log',
directories => [ $real_dir1 ],
}
Granted, it goes a little wild with the variables.
Option 2: Create a custom function
Write a function that takes what is $base_dir1 above and the boolean value for $use_authentication, and returns the merged hash if appropriate.
apache::vhost { "$name-non-ssl":
servername => $vhost_name,
docroot => $document_root,
port => 80,
access_log_file => 'access.log',
access_log_format => 'vhost_common',
error_log_file => 'error.log',
directories => [ add_auth($use_authentication, { ... }) ],
}
Option 3: Inline that
You can nonchalantly do the merge right in the resource declaration. Use a selector to decide what to merge. Readability is out the window with this one.
apache::vhost { "$name-non-ssl":
servername => $vhost_name,
docroot => $document_root,
port => 80,
access_log_file => 'access.log',
access_log_format => 'vhost_common',
error_log_file => 'error.log',
directories => [ merge({path => $document_root,
rewrites => [
{
comment => "rule1",
rewrite_base => "/",
rewrite_rule => ['^index\.html$ - [L]']
},
{
comment => "rule2",
rewrite_cond => ['%{REQUEST_FILENAME} !-f', '%{REQUEST_FILENAME} !-d'],
rewrite_rule => ['. /index.html [L]']
}
]
}, $use_authentication ? {
true => {
auth_type => 'Basic',
auth_name => "$name",
auth_user_file => '/somefile.pwd',
auth_require => 'valid-user',
},
default => {}
}
)
],
}
I didn't bother testing this monster. Not even sure the braces line up.
You might get away with a compromise between (1) and (3), but please lean towards the former. |
Thanks to mlk's comment, I did it: I implemented my own class, inspired by the OcspClientBouncyCastle code. The code is, indeed, trivial. My class manages caching: it sends only one OCSP request. This is the right way to do it.
Sample code:
// Once instanciated, this class fires one and only one OCSP request : it keeps the first result in memory.
// You may want to cache this object ; ie with MemoryCache.
public class MyOcspClientBouncyCastleSingleRequest : IOcspClient
{
private static readonly ILogger LOGGER = LoggerFactory.GetLogger(typeof(OcspClientBouncyCastle));
private readonly OcspVerifier verifier;
// The request-result
private Dictionary<String, BasicOcspResp> _cachedOcspResponse = new Dictionary<string, BasicOcspResp>();
/**
* Create default implemention of {@code OcspClient}.
* Note, if you use this constructor, OCSP response will not be verified.
*/
[Obsolete]
public MyOcspClientBouncyCastleSingleRequest()
{
verifier = null;
}
/**
* Create {@code OcspClient}
* @param verifier will be used for response verification. {@see OCSPVerifier}.
*/
public MyOcspClientBouncyCastleSingleRequest(OcspVerifier verifier)
{
this.verifier = verifier;
}
/**
* Gets OCSP response. If {@see OCSPVerifier} was set, the response will be checked.
*/
public virtual BasicOcspResp GetBasicOCSPResp(X509Certificate checkCert, X509Certificate rootCert, String url)
{
String dicKey = checkCert.SubjectDN.ToString() + "-" + rootCert.SubjectDN.ToString() + "-" + url;
if (_cachedOcspResponse != null && _cachedOcspResponse.Count > 0 && _cachedOcspResponse.ContainsKey(dicKey))
{
BasicOcspResp cachedResult = _cachedOcspResponse[dicKey];
return cachedResult;
}
else
{
try
{
OcspResp ocspResponse = GetOcspResponse(checkCert, rootCert, url);
if (ocspResponse == null)
{
_cachedOcspResponse.Add(dicKey, null);
return null;
}
if (ocspResponse.Status != OcspRespStatus.Successful)
{
_cachedOcspResponse.Add(dicKey, null);
return null;
}
BasicOcspResp basicResponse = (BasicOcspResp)ocspResponse.GetResponseObject();
if (verifier != null)
{
verifier.IsValidResponse(basicResponse, rootCert);
}
_cachedOcspResponse.Add(dicKey, basicResponse);
return basicResponse;
}
catch (Exception ex)
{
if (LOGGER.IsLogging(Level.ERROR))
LOGGER.Error(ex.Message);
}
return null;
}
}
/**
* Gets an encoded byte array with OCSP validation. The method should not throw an exception.
*
* @param checkCert to certificate to check
* @param rootCert the parent certificate
* @param url to get the verification. It it's null it will be taken
* from the check cert or from other implementation specific source
* @return a byte array with the validation or null if the validation could not be obtained
*/
public byte[] GetEncoded(X509Certificate checkCert, X509Certificate rootCert, String url)
{
try
{
BasicOcspResp basicResponse = GetBasicOCSPResp(checkCert, rootCert, url);
if (basicResponse != null)
{
SingleResp[] responses = basicResponse.Responses;
if (responses.Length == 1)
{
SingleResp resp = responses[0];
Object status = resp.GetCertStatus();
if (status == CertificateStatus.Good)
{
return basicResponse.GetEncoded();
}
else if (status is RevokedStatus)
{
throw new IOException(MessageLocalization.GetComposedMessage("ocsp.status.is.revoked"));
}
else
{
throw new IOException(MessageLocalization.GetComposedMessage("ocsp.status.is.unknown"));
}
}
}
}
catch (Exception ex)
{
if (LOGGER.IsLogging(Level.ERROR))
LOGGER.Error(ex.Message);
}
return null;
}
/**
* Generates an OCSP request using BouncyCastle.
* @param issuerCert certificate of the issues
* @param serialNumber serial number
* @return an OCSP request
* @throws OCSPException
* @throws IOException
*/
private static OcspReq GenerateOCSPRequest(X509Certificate issuerCert, BigInteger serialNumber)
{
// Generate the id for the certificate we are looking for
CertificateID id = new CertificateID(CertificateID.HashSha1, issuerCert, serialNumber);
// basic request generation with nonce
OcspReqGenerator gen = new OcspReqGenerator();
gen.AddRequest(id);
// create details for nonce extension
IDictionary extensions = new Hashtable();
extensions[OcspObjectIdentifiers.PkixOcspNonce] = new X509Extension(false, new DerOctetString(new DerOctetString(PdfEncryption.CreateDocumentId()).GetEncoded()));
gen.SetRequestExtensions(new X509Extensions(extensions));
return gen.Generate();
}
private OcspResp GetOcspResponse(X509Certificate checkCert, X509Certificate rootCert, String url)
{
if (checkCert == null || rootCert == null)
return null;
if (url == null)
{
url = CertificateUtil.GetOCSPURL(checkCert);
}
if (url == null)
return null;
LOGGER.Info("Getting OCSP from " + url);
OcspReq request = GenerateOCSPRequest(rootCert, checkCert.SerialNumber);
byte[] array = request.GetEncoded();
HttpWebRequest con = (HttpWebRequest)WebRequest.Create(url);
con.ContentLength = array.Length;
con.ContentType = "application/ocsp-request";
con.Accept = "application/ocsp-response";
con.Method = "POST";
Stream outp = con.GetRequestStream();
outp.Write(array, 0, array.Length);
outp.Close();
HttpWebResponse response = (HttpWebResponse)con.GetResponse();
if (response.StatusCode != HttpStatusCode.OK)
throw new IOException(MessageLocalization.GetComposedMessage("invalid.http.response.1", (int)response.StatusCode));
Stream inp = response.GetResponseStream();
OcspResp ocspResponse = new OcspResp(inp);
inp.Close();
response.Close();
return ocspResponse;
}
|
UploadString performs a POST in the first example; in the second example a GET is being done.
static async Task<string> GetAuthenticationTokenAsync() {
string token = string.Empty;
var clientId = ConfigurationManager.AppSettings["AuthNClientId"];
var uri = ConfigurationManager.AppSettings["AuthNUri"];
var userName = ConfigurationManager.AppSettings["AuthNUserName"];
var password = ConfigurationManager.AppSettings["AuthNPassword"];
var client = new HttpClient();
client.BaseAddress = new Uri(uri);
client.DefaultRequestHeaders.Accept.Clear();
var nameValueCollection = new Dictionary<string, string>() {
{ "client_id", clientId },
{ "grant_type", "password" },
{ "username", userName },
{ "password", password },
};
var content = new FormUrlEncodedContent(nameValueCollection);
var response = await client.PostAsync("", content);
if (response.IsSuccessStatusCode) {
Console.WriteLine("success");
var json = await response.Content.ReadAsStringAsync();
dynamic authResult = JsonConvert.DeserializeObject(json);
token = authResult.access_token;
}
else { Console.WriteLine($"failure: {response.StatusCode}"); }
return token;
}
|
This is a classic problem of distributed session storage.
First of all, the concepts of "session" (session ids and cookies) combined with "stateless" is kind of a contradiction.
OAuth2 is supposed to be a "stateless" delegated authorization framework provided you persist the initial input request (including redirect url) at the server side before generating the access code.
Leaking those details to cookies before receiving credentials could be exposing you to security exploits. You could mitigate the risk by making sure that the cookie is HttpOnly (not accessible by JS) and secure (released over httpS only), but I would not recommend that approach anyways.
About your other point: Spring Security’s remember-me feature is designed to carry a reference to the authentication credentials only, not the details of the initial OAuth2 requests. Moreover, the persistent options (PersistentTokenBasedRememberMeServices) only support in-memory (single node) and JDBC flavors by default.
Adjusting those for your needs will required considerable changes. Doable but requires a lot of effort.
In my experience, there are two alternatives that comes to mind:
Configure sticky sessions using a front-load balancer (e.g: haproxy, nginx, F5, etc…). The user session will be tied to the node where the credentials where submitted.
The implication is that if that node goes down; the user will have to re-authenticate to create new access tokens, but the access tokens already given should be fine if used against other nodes.
Configure/implement a transparent distributed web session storage.
Some distributed memory storage providers (e.g. Hazelcast) offer plugins that can be configured in the application servers to make this transparent for the user. There is some added overhead implied in this, but there is almost no additional code needed to satisfy your requirement.
|
As specified, render is looking for templates directory relative to the current working directory, i.e, from where you're trying to execute the program.
A few different ways to fix it:
Execute from blogpy directory:
python bin/blog.py
This will correctly find templates directory in blogpy/templates.
Execute from bin directory, but have render look in ../templates. This would require a small change to your code where you specify the location of the templates subdir. (This is what I suggest for your example).
Execute from bin directory, but move the templates directory to be under the bin directory. This may look odd, but if you're looking for a simple solution, there's nothing wrong with it.
Use fully specified path names. Ultimately, this is what you should be doing, though it's overkill for your example. You'll save time in the future having a block like:
=== repositories.py ===
import os
_dir_name = os.path.dirname(__file__) or os.getcwd()
templates = os.path.normpath(os.path.join(_dir_name, "../templates"))
You'll be able to locate all your subdirectories (for static content, maybe encryption keys, etc.) reliably regardless how the code is invoked.
|
I accepted the above answer because it appears to be correct, however, I actually implemented it differently...
describe("Login Component", () => {
let component: LoginComponent;
let authService: AuthenticationService;
let router: Router;
describe("Testing the subscription happens", () => {
beforeEach(() => {
TestBed.configureTestingModule({imports: [RouterTestingModule]});
router = TestBed.get(Router);
authService = new AuthenticationService();
authService.notifications = new Subject();
authService.notifications.subscribe = jasmine.createSpy("SpyToTestNotifications");
});
it("Make sure we try to subscribe to the auth event", () => {
component = new LoginComponent(authService, router);
expect(authService.notifications.subscribe).toHaveBeenCalled();
})
});
});
As you can see this only requires 2 lines in the beforeEach...
TestBed.configureTestingModule({imports: [RouterTestingModule]});
router = TestBed.get(Router);
However, per @jonrsharpe this does a lot of things so you can't guarantee what other side effects might happen. But it is quick, it is dirty and it does seem to "work" |
Application_AuthenticateRequest is only called when you request a new resource.
In your case, you are still in the request which creates the FormsAuthenticationTicket. As a result, the Principal object hasn't been assigned to the current thread yet.
If you want to retrieve IPrincipal from the current thread, you will need to assign it explicitly.
var encryptedTicket = FormsAuthentication.Encrypt(ticket);
var cookie = new HttpCookie(FormsAuthentication.FormsCookieName, encryptedTicket);
// You need this two lines.
HttpContext.Current.User = new GenericPrincipal(id, roles);
Thread.CurrentPrincipal = HttpContext.Current.User;
....
Also make sure you have those two line inside Application_AuthenticateRequest.
protected void Application_AuthenticateRequest(Object sender, EventArgs e)
{
...
HttpContext.Current.User = new GenericPrincipal(id, roles);
Thread.CurrentPrincipal = HttpContext.Current.User; <-- Do not forget this.
...
}
FYI: You do not need AuthorizeAttribute on private method. You only want it on Controller or Action Methods.
[Authorize(Roles = "Administrator, User")] <-- This is not needed.
private void CreateTicket(string id, string role)
{
...
}
|
You can use ssh-agent. The man-page says :
ssh-agent is a program to hold private keys used for public key authentication (RSA, DSA, ECDSA, Ed25519). ssh-agent is usually started in the beginning of an X-session or a login session, and all other windows or programs are started as clients to the ssh-agent program. Through use of environment variables the agent can be located and automatically used for authentication when logging in to other machines using ssh(1).
On further reading you can see :
The agent initially does not have any private keys. Keys are added using ssh-add(1). When executed without arguments, ssh-add(1) adds the files ~/.ssh/id_rsa, ~/.ssh/id_dsa, ~/.ssh/id_ecdsa, ~/.ssh/id_ed25519 and ~/.ssh/identity. If the identity has a passphrase, ssh-add(1) asks for the passphrase on the terminal if it has one or from a small X11 program if running under X11. If neither of these is the case then the authentication will fail. It then sends the identity to the agent. Several identities can be stored in the agent; the agent can automatically use any of these identities. ssh-add -l displays the identities currently held by the agent.
|
After doing some tests on the JSON link you provided, it turns out your web server does inject HTML in front of the actual content. This HTML is meant to set a cookie via JavaScript and then automatically load the actual data.
The problem is not with your JSON, but your web framework. You'll see below that the page loads aes.js from your root directory, which is most likely used for the decryption/encryption of the cookie.
Here is the HTML returned:
<html>
<body>
<script type="text/javascript" src="/aes.js"></script>
<script>
function toNumbers(d) {
var e = [];
d.replace(/(..)/g, function(d) {
e.push(parseInt(d, 16))
});
return e
}
function toHex() {
for (var d = [], d = 1 == arguments.length && arguments[0].constructor == Array ? arguments[0] : arguments, e = "", f = 0; f < d.length; f++) e += (16 > d[f] ? "0" : "") + d[f].toString(16);
return e.toLowerCase()
}
var a = toNumbers("f655ba9d09a112d4968c63579db590b4"),
b = toNumbers("98344c2eee86c3994890592585b49f80"),
c = toNumbers("db90ed280d6dd97b2c5a2f1352115adf");
document.cookie = "__test=" + toHex(slowAES.decrypt(c, 2, a, b)) + "; expires=Thu, 31-Dec-37 23:55:55 GMT; path=/";
location.href = "http://tradersdb.rf.gd/service1.php?i=1";
</script>
<noscript>
This site requires Javascript to work, please enable Javascript in your browser or use a browser with Javascript support
</noscript>
</body>
</html>
It's most likely a bot protection plugin on your server. If you have control over your server plugins, you should disable / remove it. The only other options are to either find a better host which allows plain content access or find a way to disable this server setting.
Update:
It would appear this is a free hosting server. I would strongly suggest you move off this and you will find that your code works. If it's a free host, you probably can't disable it. In that case, this host provider is useless for you and you should search for a new one. |
Hope you're doing more than great!
I'm afraid encryption is not a problem that's currently being worked on by our team.
Although class diagrams aren't really uploaded anywhere, there are several automatic-generation-tools, such as https://www.visual-paradigm.com/solution/freeumltool/, which would dynamically render those for you.
See below!
The way Simperium works is quite interesting. Local database entities keep a copy of the last known remote state in a field which we call Ghost.
Whenever a local change is performed, the library will calculate the diff between the last known remote state (AKA Ghost), and the local state. This change is enqueued, and sent whenever possible.
Now, here is where it gets extra tricky. The backend is considered the canonical repository // Master, and the clients actually perform Change Requests. It's up to the backend either to accept or reject the change.
This is analog to the way GIT works, in some sense. If you attempt to push a local change, after the remote branch diverged, you will get an error, and a rebase / merge will need to take place.
By design, I'm afraid that Simperium's backend needs to be able to apply the diff posted by the client to its local database. Implementing encryption would require rethinking the way the protocol works, and patching the backend as well.
If you'd like to further discuss this, please, feel free to mail me directly at jorge.perez (at) automattic -dot- com, or poke me over the WordPress.org Slack. Would be more than happy to walk you through the architecture.
Thank you for your interest in Simperium / Simplenote! |
If I understand correctly, that should be fairly easy. The Forms Auth module is only about issuing a cookie for the local app and maintaining the session for currently logged user. It doesn't matter then how you obtain the authentication, by validating the username/password in your app or by accepting a SAML token.
Technically, remove both SAM and FAM modules from the pipeline. Make Forms your authentication method again so you have a regular forms auth based app.
Then, just add code to your login endpoint that optionally accepts an incoming SAML token from ADFS or any other STS. You could follow my tutorial:
http://www.wiktorzychla.com/2014/11/simplest-saml11-federated-authentication.html
The critical part of the code is surprisingly straightforward
var securityToken = fam.GetSecurityToken( request );
var config = new SecurityTokenHandlerConfiguration
{
CertificateValidator = X509CertificateValidator.None,
IssuerNameRegistry = new CustomIssuerNameRegistry()
};
config.AudienceRestriction.AudienceMode = AudienceUriMode.Never;
var tokenHandler = new SamlSecurityTokenHandler
{
CertificateValidator = X509CertificateValidator.None,
Configuration = config
};
// validate the token and get the ClaimsIdentity out of it
var identity = tokenHandler.ValidateToken( securityToken );
As soon as you have the identity of the user from the incoming token, use Forms Auth to issue the very same cookie you would issue in a regular forms auth based app (note that in my tutorial I am issuing a SAM cookie here which is just another possibility you don't want to follow here since you insist specifically on forms cookies). |
So I had a similar problem. I came here and saw various answers but with some experimentation here is how I got it work with sshkeys with passphrase, ssh-agent and cron.
First off, my ssh setup uses the following script in my bash init script.
# JFD Added this for ssh
SSH_ENV=$HOME/.ssh/environment
# start the ssh-agent
function start_agent {
echo "Initializing new SSH agent..."
# spawn ssh-agent
/usr/bin/ssh-agent | sed 's/^echo/#echo/' > "${SSH_ENV}"
echo succeeded
chmod 600 "${SSH_ENV}"
. "${SSH_ENV}" > /dev/null
/usr/bin/ssh-add
}
if [ -f "${SSH_ENV}" ]; then
. "${SSH_ENV}" > /dev/null
ps -ef | grep ${SSH_AGENT_PID} | grep ssh-agent$ > /dev/null || {
start_agent;
}
else
start_agent;
fi
When I login, I enter my passphrase once and then from then on it will use ssh-agent to authenticate me automatically.
The ssh-agent details are kept in .ssh/environment. Here is what that script will look like:
SSH_AUTH_SOCK=/tmp/ssh-v3Tbd2Hjw3n9/agent.2089; export SSH_AUTH_SOCK;
SSH_AGENT_PID=2091; export SSH_AGENT_PID;
#echo Agent pid 2091;
Regarding cron, you can set up a job as a regular user in various ways.
If you run crontab -e as the root user it will set up a root user cron. If you run crontab -u davis -e it will add a cron job as userid davis. Likewise, if you run as user davis and do crontab -e it will create a cron job which runs as userid davis. This can be verified with the following entry:
30 * * * * /usr/bin/whoami
This will mail the result of whoami every 30 minutes to user davis. (I did a crontab -e as user davis.)
If you try to see what keys are used as user davis, do this:
36 * * * * /usr/bin/ssh-add -l
It will fail, the log sent by mail will say
To: [email protected]
Subject: Cron <davis@hostyyy> /usr/bin/ssh-add -l
Could not open a connection to your authentication agent.
The solution is to source the env script for ssh-agent above. Here is the resulting cron entry:
55 10 * * * . /home/davis/.ssh/environment; /home/davis/bin/domythingwhichusesgit.sh
This will run the script at 10:55. Notice the leading . in the script. It says to run this script in my environment similar to what is in the .bash init script. |
So in a word "yes". Everything I asked in my question was yes.
I can connect to the db like shown in the sequelize documentation. I can also configure the config.json file to have a "postgres" configuration with my user and db name. I could also place the full path in the services/index.js file when creating the new sequelize object. The best way to check that there is a connection is to have the following code after creating the new sequelize object:
new_sequelize_object
.authenticate()
.then(function(err) {
console.log('Connection to the DB has been established successfully.');
})
.catch(function (err) {
console.log('Unable to connect to the database:', err);
});
(taken from: http://docs.sequelizejs.com/en/latest/docs/getting-started/)
one can also define several sequelize objects and set them in the app. Then when defining the model in the specific service's index.js file, place the new bound name in the app.get('new_sequelize_object').
Here is the services/index.js file with two databases defined:
'use strict';
const service1 = require('./service1');
const authentication = require('./authentication');
const user = require('./user');
const Sequelize = require('sequelize');
module.exports = function() {
const app = this;
const sequelize = new Sequelize('feathers_db1', 'u1', 'upw', {
host: 'localhost',
port: 5432,
dialect: 'postgres',
logging: false
});
const sequelize2 = new Sequelize('postgres://u1:upw@localhost:5432/feathers_db2', {
dialect: 'postgres',
logging: false
});
app.set('sequelize', sequelize);
app.set('sequelize2', sequelize2);
sequelize
.authenticate()
.then(function(err) {
console.log('Connection to sequelize has been established successfully.');
})
.catch(function (err) {
console.log('Unable to connect to the database:', err);
});
sequelize2
.authenticate()
.then(function(err) {
console.log('Connection has been established to sequelize2 successfully.');
})
.catch(function (err) {
console.log('Unable to connect to the database:', err);
});
app.configure(authentication);
app.configure(user);
app.configure(service1);
};
And here is the service1/index.js file that uses service sequelize2:
'use strict';
const service = require('feathers-sequelize');
const service1 = require('./service1-model');
const hooks = require('./hooks');
module.exports = function(){
const app = this;
const options = {
//Here is where one sets the name of the differeng sequelize objects
Model: service1(app.get('sequelize2')),
paginate: {
default: 5,
max: 25
}
};
// Initialize our service with any options it requires
app.use('/service1', service(options));
// Get our initialize service to that we can bind hooks
const service1Service = app.service('/service1');
// Set up our before hooks
service1Service.before(hooks.before);
// Set up our after hooks
service1Service.after(hooks.after);
};
|
I recently undertook this project in C. The code below does the following:
1) Gets the current orientation of the image.
2) Removes all data contained in APP1 (Exif data) and APP2 (Flashpix data) by blanking.
3) Recreates the APP1 orientation marker and sets it to the original value.
4) Finds the first EOI marker (End of Image) and truncates the file if necessary.
Some things to note first are:
1) This program is used for my Nikon camera. Nikon's JPEG format adds something to the very end of each file it creates. They encode this data onto the end of the image file by creating a second EOI marker. Normally image programs read up to the first EOI marker found. Nikon has information after this which my program truncates.
2) Because this is for Nikon format, it assumes big endian byte order. If your image file uses little endian, some adjustments need to be made.
3) When trying to use ImageMagick to strip exif data, I noticed that I ended up with a larger file than what I started with. This leads me to believe that ImageMagick is encoding the data you want stripped away, and is storing it somewhere else in the file. Call me old fashioned, but when I remove something from a file, I want the file size to be smaller, if not the same size. Any other results suggest data mining.
And here is the code:
#include <stdio.h>
#include <stdlib.h>
#include <libgen.h>
#include <string.h>
#include <errno.h>
// Declare constants.
#define COMMAND_SIZE 500
#define RETURN_SUCCESS 1
#define RETURN_FAILURE 0
#define WORD_SIZE 15
int check_file_jpg (void);
int check_file_path (char *file);
int get_marker (void);
char * ltoa (long num);
void process_image (char *file);
// Declare global variables.
FILE *fp;
int orientation;
char *program_name;
int main (int argc, char *argv[])
{
// Set program name for error reporting.
program_name = basename(argv[0]);
// Check for at least one argument.
if(argc < 2)
{
fprintf(stderr, "usage: %s IMAGE_FILE...\n", program_name);
exit(EXIT_FAILURE);
}
// Process all arguments.
for(int x = 1; x < argc; x++)
process_image(argv[x]);
exit(EXIT_SUCCESS);
}
void process_image (char *file)
{
char command[COMMAND_SIZE + 1];
// Check that file exists.
if(check_file_path(file) == RETURN_FAILURE)
return;
// Check that file is an actual JPEG file.
if(check_file_jpg() == RETURN_FAILURE)
{
fclose(fp);
return;
}
// Jump to orientation marker and store value.
fseek(fp, 55, SEEK_SET);
orientation = fgetc(fp);
// Recreate the APP1 marker with just the orientation tag listed.
fseek(fp, 21, SEEK_SET);
fputc(1, fp);
fputc(1, fp);
fputc(18, fp);
fputc(0, fp);
fputc(3, fp);
fputc(0, fp);
fputc(0, fp);
fputc(0, fp);
fputc(1, fp);
fputc(0, fp);
fputc(orientation, fp);
// Blank the rest of the APP1 marker with '\0'.
for(int x = 0; x < 65506; x++)
fputc(0, fp);
// Blank the second APP1 marker with '\0'.
fseek(fp, 4, SEEK_CUR);
for(int x = 0; x < 2044; x++)
fputc(0, fp);
// Blank the APP2 marker with '\0'.
fseek(fp, 4, SEEK_CUR);
for(int x = 0; x < 4092; x++)
fputc(0, fp);
// Jump the the SOS marker.
fseek(fp, 72255, SEEK_SET);
while(1)
{
// Truncate the file once the first EOI marker is found.
if(fgetc(fp) == 255 && fgetc(fp) == 217)
{
strcpy(command, "truncate -s ");
strcat(command, ltoa(ftell(fp)));
strcat(command, " ");
strcat(command, file);
fclose(fp);
system(command);
break;
}
}
}
int get_marker (void)
{
int c;
// Check to make sure marker starts with 0xFF.
if((c = fgetc(fp)) != 0xFF)
{
fprintf(stderr, "%s: get_marker: invalid marker start (should be FF, is %2X)\n", program_name, c);
return(RETURN_FAILURE);
}
// Return the next character.
return(fgetc(fp));
}
int check_file_jpg (void)
{
// Check if marker is 0xD8.
if(get_marker() != 0xD8)
{
fprintf(stderr, "%s: check_file_jpg: not a valid jpeg image\n", program_name);
return(RETURN_FAILURE);
}
return(RETURN_SUCCESS);
}
int check_file_path (char *file)
{
// Open file.
if((fp = fopen(file, "rb+")) == NULL)
{
fprintf(stderr, "%s: check_file_path: fopen failed (%s) (%s)\n", program_name, strerror(errno), file);
return(RETURN_FAILURE);
}
return(RETURN_SUCCESS);
}
char * ltoa (long num)
{
// Declare variables.
int ret;
int x = 1;
int y = 0;
static char temp[WORD_SIZE + 1];
static char word[WORD_SIZE + 1];
// Stop buffer overflow.
temp[0] = '\0';
// Keep processing until value is zero.
while(num > 0)
{
ret = num % 10;
temp[x++] = 48 + ret;
num /= 10;
}
// Reverse the word.
while(y < x)
{
word[y] = temp[x - y - 1];
y++;
}
return word;
}
Hope this helps someone! |
You could use ClaimTransformation, I just got it working this afternoon using the article and code below. I am accessing an application with Window Authentication and then adding claims based on permissions stored in a SQL Database. This is a good article that should help you.
https://github.com/aspnet/Security/issues/863
In summary ...
services.AddScoped<IClaimsTransformer, ClaimsTransformer>();
app.UseClaimsTransformation(async (context) =>
{
IClaimsTransformer transformer = context.Context.RequestServices.GetRequiredService<IClaimsTransformer>();
return await transformer.TransformAsync(context);
});
public class ClaimsTransformer : IClaimsTransformer
{
private readonly DbContext _context;
public ClaimsTransformer(DbContext dbContext)
{
_context = dbContext;
}
public async Task<ClaimsPrincipal> TransformAsync(ClaimsTransformationContext context)
{
System.Security.Principal.WindowsIdentity windowsIdentity = null;
foreach (var i in context.Principal.Identities)
{
//windows token
if (i.GetType() == typeof(System.Security.Principal.WindowsIdentity))
{
windowsIdentity = (System.Security.Principal.WindowsIdentity)i;
}
}
if (windowsIdentity != null)
{
//find user in database by username
var username = windowsIdentity.Name.Remove(0, 6); // drop the first 6 characters (the domain prefix in this environment)
var appUser = _context.User.FirstOrDefault(m => m.Username == username);
if (appUser != null)
{
((ClaimsIdentity)context.Principal.Identity).AddClaim(new Claim("Id", Convert.ToString(appUser.Id)));
/*//add all claims from security profile
foreach (var p in appUser.Id)
{
((ClaimsIdentity)context.Principal.Identity).AddClaim(new Claim(p.Permission, "true"));
}*/
}
}
return await System.Threading.Tasks.Task.FromResult(context.Principal);
}
}
|
Disclosure: I work at Auth0.
Tokens! Tokens! Tokens!
The most widespread approach to authenticate users in a Web API is through the use of token-based authentication. The procedure can be reduced to these steps:
The client application includes a token in the request (Authorization header).
The Web API validates the token and, if valid, processes the request in accordance to the information associated with the token.
This type of token is usually referred to as a bearer token, because the only thing that an application has to do to get access to an API-protected resource is provide the token. The use of HTTPS with this type of authentication is vital in order to ensure that the token cannot be easily captured by an attacker while traveling from client to server.
The token can be classified further either as:
by-value token - associated information is contained in the token itself
by-reference token - associated information is kept on server-side storage that is then found using the token value as the key
A popular format used for by-value token is the JWT format (Get Started with JSON Web Tokens) given it's encoded in a Web friendly way and also has a fairly concise representation in order to reduce overhead on the wire.
Choosing between by-value or by-reference token is a matter of considering the pros and cons of each approach and review any specific requirements you may have. If you go with JWT, check jwt.io for reference on libraries supporting this format across a wide range of technologies.
How does my application get the tokens in the first place?
Setting up your API to authenticate users with tokens can be seen as the easiest part, although the need to think about all the usual security precautions still applies.
The biggest issue with a token-based authentication system is putting in place a system that can issue tokens to your different client applications, which may use different technologies or be on completely different platforms.
The answer to this, as mentioned on another answers, is to rely on OAuth 2.0 and the OpenID Connect protocols and do one of the following:
Implement an identity provider/authorization server system compliant with the mentioned protocols
⤷ time consuming and complex, but you're following standards so you're less likely to mess up and you'll also gain interoperability
Delegate the authentication to a third-party authentication provider like Auth0
⤷ easy to get started, depending on amount of usage (the free plan on Auth0 goes up to 7000 users) it will cost you money instead of time
|
Always use tested libraries for such purposes. Your encryption is vulnerable and completely insecure because you're not using IV correctly.
Consider using defuse/php-encryption library and get rid of what you've done.
Why is what you've done wrong:
The same IV (initialization vector) is used.
There is no salt in encryption, it's called Initialization Vector and it must be different every time you encrypt - your IV is always the same
When encryption is done, you must deliver the encrypted data and IV - you are not returning IV with encryption result, only the result.
Currently, you are not doing what I outlined and that's why you should invest your time into using a library that takes care of encryption so you don't roll out your own, insecure implementation. I'm deliberately not posting the code required for this encryption to work from fear that someone will use it, instead of library that I linked. Always use libraries made by other people if you have no idea what you're doing. |
You are getting null with:
SecurityContextHolder.getContext().getAuthentication()
because you are not authenticating within you security configuration.
You can add a simple:
.authenticated()
.and()
// ...
.formLogin();
in case you're using form login.
Now after you'll authenticate each request you suppose to get something other than null.
Here's an example from Spring Security docs:
protected void configure(HttpSecurity http) throws Exception {
http
.authorizeRequests()
.antMatchers("/resources/**", "/signup", "/about").permitAll()
.antMatchers("/admin/**").hasRole("ADMIN")
.antMatchers("/db/**").access("hasRole('ADMIN') and hasRole('DBA')")
.anyRequest().authenticated()
.and()
// ...
.formLogin();
}
|
With icalendar
Add these gems to your Gemfile
gem 'mail'
gem 'icalendar'
You must configure the mail gem inside config/environment.rb, for example for RoR >= 4.2:
# Load the Rails application.
require File.expand_path('../application', __FILE__)
# Initialize the Rails application.
Rails.application.initialize!
# Initialize sendgrid
ActionMailer::Base.smtp_settings = {
:user_name => 'username',
:password => 'password',
:domain => 'something.com',
:address => 'smtp.something.com',
:port => 587,
:authentication => :plain,
:enable_starttls_auto => true
}
User model
has_many :calendar_events
Fields
fullname
mail
CalendarEvent model
belongs_to :user
Fields
title
description
start_time
end_time
user_id
app/mailers/mail_notifier.rb
class MailNotifier < ActionMailer::Base
default from: '[email protected]'
def send_calendar_event(calendar_event, organizer)
@cal = Icalendar::Calendar.new
@cal.event do |e|
e.dtstart = calendar_event.start_time
e.dtend = calendar_event.end_time
e.summary = calendar_event.title
e.organizer = "mailto:#{organizer.mail}"
e.organizer = Icalendar::Values::CalAddress.new("mailto:#{organizer.mail}", cn: organizer.fullname)
e.description = calendar_event.description
end
mail.attachments['calendar_event.ics'] = { mime_type: 'text/calendar', content: @cal.to_ical }
mail(to: calendar_event.user.mail,
subject: "[SUB] #{calendar_event.description} from #{l(calendar_event.start_time, format: :default)}")
end
end
Now you can call MailNotifier from controller with the following code
MailNotifier.send_calendar_event(@calendar_event, organizer_user).deliver
|
One answer: No, because security issues are never irrelevant in the kernel.
memcpy() in particular is a bad function to use because the third argument is a signed integer. If the user can in any way influence the value of this third parameter, you open yourself up to serious liability issues if someone attempts to copy a negative number of bytes.
Many a serious buffer overflow bug has been due to the signedness of memcpy().
Another answer: No, because copy_to_user() and copy_from_user() don't just do access_ok(). Those first two functions make sure that the copy you are currently trying to achieve right now will succeed, or fail appropriately. This is not what access_ok() does for you. The documentation for access_ok() specifically says that this function doesn't guarantee that memory accesses will actually succeed:
Note that, depending on architecture, this function probably just
checks that the pointer is in the user space range - after calling
this function, memory access functions may still return -EFAULT.
For example, my most recent source code has, for x86, runtime checking that goes beyond access_ok(): https://lwn.net/Articles/612153/
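To make the second point concrete, here is a minimal kernel-style sketch of the pattern those interfaces encourage; the handler name, buffer and size are assumptions made up for the example, not taken from any real driver:
#include <linux/fs.h>
#include <linux/uaccess.h>

/* Hypothetical example buffer; name and size are made up for the sketch. */
static char kbuf[64];

/* Hypothetical .write handler: copy_from_user() both checks and performs
 * the copy, returning the number of bytes it could NOT copy. */
static ssize_t demo_write(struct file *file, const char __user *ubuf,
                          size_t count, loff_t *ppos)
{
    if (count > sizeof(kbuf))
        count = sizeof(kbuf);          /* clamp the user-supplied length */

    if (copy_from_user(kbuf, ubuf, count))
        return -EFAULT;                /* the fault is reported here, not
                                          silently missed as it could be
                                          with access_ok() + memcpy()    */
    return count;
}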
Yet a third answer: memcpy() probably isn't much more efficient. You might save a few instructions here and there in principle, but those setup and checking instructions are going to be negligible once you're copying anything more than the smallest quantities of data. |
As you did not mention it, I will assume you are in a Windows Active Directory domain environment? I say that because the command "ktpass" given in your example is native to Windows. Based on this I will assume that your Active Directory DNS domain name is abc.com and Kerberos realm name is ABC.COM.
When you create a keytab, the SPN gets mapped to the user or computer object (principal, in Kerberos terms) at that time so you don't need to adjust the SPN of that principal afterwards unless you are adding them as secondary SPNs.
Do yourself a favor and place the Kerberos realm name in upper-case inside your keytab creation command. It's best to randomize the password so nobody knows it. Kerberos SSO functionality will work just fine with that. And the Kerberos realm name needs to be appended to the "/mapUser" argument. I've modified your example into a better one you should use.
Its outside the scope of your question but don't use DES encryption anymore. It's long been out of favor in the industry. I won't say more on that because that's not what you're asking about.
Don't use the "setspn -a" syntax to add or create SPNs on principals, use "setspn -s" instead as the "-s" checks for duplicates SPNs while the "-a" does not (see: “setspn -s” vs. “setspn -a”).
Ensure that you fully-qualify the host part of the SPN (i.e., dummy.abc.com, rather than just dummy). Else, the authentication mechanism might immediately try NTLM instead of Kerberos which is not what you want.
In a simple environment consisting of just a single DNS domain and a single Kerberos realm, with the Kerberos realm to DNS domain mappings already set (usually by /etc/krb5.conf on UNIX/Linux; Windows handles that automatically, but if it doesn't it will try C:\Windows\krb5.ini if present), you would not need to qualify the Kerberos realm as part of the SPN when running "setspn -a" or "setspn -s", but you should still do so inside your keytab creation command.
So, in your case, based on everything I mentioned, while you can use:
setspn -a CS/dummy dummyuser
It would be better to do it this way instead:
setspn -s CS/dummy.abc.com dummyuser
For extra credit I've also modified your keytab creation command accordingly, though keeping the DES part so as not to further confuse.
ktpass +rndPass -out dummy.1.keytab -princ CS/[email protected] -crypto DES-CBC-MD5 +DumpSalt -ptype KRB5_NT_PRINCIPAL +desOnly /mapOp set /mapUser [email protected]
|
Using a Shared Access Signature (SAS) could be a solution, but it is probably overkill in the given scenario.
In the provided scenario, Blob Storage with public access is the most practical way to store files. From the documentation:
... You can specify that a container and its blobs, or a specific blob, are available for public access. When you indicate that a container or blob is public, anyone can read it anonymously; no authentication is required. Public containers and blobs are useful for exposing resources such as media and documents that are hosted on websites. To decrease network latency for a global audience, you can cache blob data used by websites with the Azure CDN.
https://azure.microsoft.com/en-us/documentation/articles/storage-introduction/
To set container permissions from the Azure Portal, follow these steps:
Navigate to the dashboard for your storage account.
Select the container name from the list. Clicking the name exposes the blobs in the chosen container
Select Access policy from the toolbar.
In the Access type field, select "Blob"
|
Buffer overflow.
How does code know to not overfill DNI[]? Only the address of DNI was passed to sscanf(). Since OP's code did not provide enough space for the input, DNI[] was overwritten leading to undefined behavior (UB).
sscanf(example, "%[^;] ... ", DNI, ....
Instead, pass the maximum number of characters to read, the width. A width of 9 will limit the scan to at most 9 characters. As the array is size 10, there will be enough memory for the 9 characters and the appended null character. The code also needs to make the buffers larger to accommodate user input.
char DNI[10*2];
sscanf(example, "%19[^;] ... ", DNI, ....
But was the entire scan right?
A simple method is to record the offset of the scan with " %n". If the scan reached the "%n", n will have a new value.
int n = -1;
sscanf(example, "%9[^;] ;... %n", DNI, ..., &n);
Put this together
int main(int argc, char **argv) {
const char example[]="13902796D ; 1 ;";
char DNI[10*2];
int uni;
int n = -1;
sscanf(example, "%19[^;] ; %d ; %n", DNI, &uni, &n);
if (n < 0) {
puts("Fail");
} else {
printf("DNI: %s\n", DNI);
printf("Uni: %d\n", uni);
}
return 0;
}
Code still has issues as DNI[] is likely "13902796D " (note trailing space), but that is another issue for OP to handle. |
This was solved by adding a voter to the existing list of decision voters.
The first step was to create a custom voter class
public class CustomVoter implements AccessDecisionVoter {
@Override
public boolean supports(ConfigAttribute attribute) {
return true;
}
@Override
public int vote(Authentication authentication, Object object, Collection collection) {
//Place your decision code here
if( check_is_true() ) {
//grant access
return ACCESS_GRANTED;
} else if ( check_is_false() ) {
//deny access
return ACCESS_DENIED;
} else {
//do not make a choice
return ACCESS_ABSTAIN;
}
}
@Override
public boolean supports(Class clazz) {
return true;
}
}
We now need to add this voter to the list of voters that will make the access decision.
@Configuration
public class DecisionVotersConfiguration {
@Autowired
MethodInterceptor methodSecurityInterceptor;
@PostConstruct
@DependsOn("methodSecurityInterceptor")
public void modifyAccessDecisionManager() {
((AffirmativeBased)((MethodSecurityInterceptor)methodSecurityInterceptor).getAccessDecisionManager()).getDecisionVoters().add(0, new CustomVoter());
}
}
This will add your custom decision voter to the list of decision voters. By placing it at index 0 it will be checked first. This will allow the voter to grant access before later checks would deny access. The method in this configuration class depends on the methodSecurityInterceptor being created, which will have the initial list of decision voters.
So you’ve got two questions in here (you didn't ask how to actually solve the problem, to do that more details would be needed - see comments).
You asked "The proxy is configured with kerberos, and has clearly the service name HTTP/proxy.foo.bar set in it's configs. How does the client know which service name to request the ticket to?"
A. It works pretty much like this. The client types in a URL in the web browser or clicks on a hyperlink. It looks up in DNS the IP address of the host named in the URL. Then it goes to that host, looking for the service defined in the URL, in this case the HTTP service. If it receives an HTTP 401 Negotiate challenge (it's 401, not 407) from the web server, due to it being Kerberos-protected, it goes to its KDC and requests a Kerberos service ticket for HTTP/proxy.foo.bar, zips back to proxy.foo.bar and presents the ticket to that host for the HTTP service running on it. The host validates this ticket and, if all is well, the client web browser renders the HTML. You've seen the Kerberos ticket when you ran klist on the client. I don't have any web references for you, this is all off the top of my head.
You also asked “Does it request the ticket to the domain name he's making request to (in this case it is proxy.foo.bar indeed), or does it receive the name in the authentication sequence, in a 407 reply in this case (which doest contain the negotiate challenge, but I just don't know if there's a way to look into it) ?”
A. Your question was a bit hard to follow but if I am understanding you correctly, the answer is the web client requests a ticket as a result of the HTTP 401 Negotiate authentication challenge from the web server (see above).
There’s many diagrams sequencing this process on the web, including here: http://www.zeroshell.org/kerberos/Kerberos-operation/ |
Here is a solution; it allows you to define a series of users, enables basic HTTP authentication and uses the logged-in username for the git commits.
require 'rubygems'
require 'gollum/app'
gollum_path = File.expand_path(File.dirname(__FILE__))
wiki_options = {
:live_preview => false,
:allow_editing => true,
:allow_uploads => true,
:universal_toc => false,
}
users = {'user' => 'password',
'user2' => 'password2'}
use Rack::Auth::Basic, 'realm' do |username, password|
if users.key?(username) && users[username] == password
Precious::App.set(:loggedInUser, username)
end
end
Precious::App.set(:gollum_path, gollum_path)
Precious::App.set(:default_markup, :markdown)
Precious::App.set(:wiki_options, wiki_options)
run Precious::App
#set author
class Precious::App
before do
session['gollum.author'] = {
:name => "%s" % settings.loggedInUser,
:email => "%[email protected]" % settings.loggedInUser,
}
end
end
|
It's neither deep wisdom nor sloppy code. It's just the nature of a distributed system.
Your Git repository is yours. You control it, in all aspects. You decide what to put in and what to keep out. You also decide whether and when to digitally sign commits and/or tags (see git tag --sign and a mountain of PGP documentation).
You do, of course, also have control at transfer points. Specifically, at various times, someone gives you some set of commits and/or tags, plus the stuff that goes with them (trees and blobs), and asks you to put them in your repository. This operation is git fetch if you are retrieving data from them, or git push if they are sending data to you. You can, at that time, decide whether to accept them or reject them. Git provides direct control over this rather binary operation through "hooks".
You could both reject them (tell the other end "no") and secretly copy and modify them, so that you secretly accept them but change them. One can even imagine a system in which this process is formalized and allowed directly during a fetch or push session: "I see you are offering me these commits and other objects, but I don't like them as they are, I will modify them."
There are some good technical reasons not to do it this way. In particular, the identity of a Git object is a cryptographic hash of its contents, and if the receiving Git were to adjust or replace some or all of the contents, it would necessarily also come up with a new hash. The hash function is deliberately designed to be "one-way", i.e., given just a hash, it is very difficult to come up with contents that produce that hash. Therefore, for this to work very well, the receiving Git would not only have to say to the sending Git: "I don't really like that, but I will take it if you change it to this"—and thus become a sending Git, and now the original sender becomes the receiver and has to do the same thing yet again. So instead, where Git does implement accept-or-reject, there is no intermediate version: the receiving Git simply rejects the attempt and it's now up to the sender to choose whether to correct the problem.
(The actual vetting process is really run only on push since fetch just puts new commits in a place where you, the person in control of your repository, can examine them before storing them under your names. There is virtually no vetting for tags on fetch, even though they go into a global name-space: any tags you already have are retained, rejecting the attempt to store the new one, but any tags you do not have are accepted and stored, and you would have to manually rip them out if you decide you hate them after all.)
GitHub has its own Git repositories, and GitHub could do this kind of vetting: make sure that incoming pushed commits have, as their user name and email address, something that matches a valid user name and email address as stored in the account information that whoever is doing the push used to authenticate themselves. It's merely traditional not to bother, since this would also be a pain for people who aggregate others' work and therefore push commits that deliberately give credit to the original author. One would presumably also have to bypass it on the initial push creating a new GitHub repository for some existing, long-running project with many authors.
Note that what you give to GitHub is not a user-name-and-email-address, though: it is, instead, some sort of authentication credential (such as an ssh key, or a time-limited authentication cookie). It tells GitHub that you know some sort of shared secret: that you are (probably) you. (GitHub does keep a mapping: ssh keys map to a GitHub account, and GitHub obviously has an email address associated with the account.) |
C++ supports the scanf function. There is no simple alternative, especially if you want to replicate the exact semantics of scanf() with all the quirks.
Note however that your code has several issues:
You do not pass the maximum number of characters to read into ps1 and ps2. Any sufficiently long input sequence will cause a buffer overflow with dire consequences.
You could simplify the first format %*[ \t\n] to just a space in the format string. This would also allow for the case where no whitespace characters are present. As currently written, scanf() would fail and return 0 if no whitespace characters are present before the ".
Similarly, if no letters follow, or if no other characters follow before the second ", scanf() would return a short count of 0 or 1 and leave one or both destination arrays in an indeterminate state.
For all these reasons, it would be much safer and more predictable in C to first read a line of input with fgets() and then use sscanf() or parse the line by hand.
In C++, you definitely want to use the std::regex facilities defined in the <regex> header.
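To illustrate the safer line-first approach in C++, here is a minimal sketch that reads a whole line and then extracts two double-quoted fields with std::regex. The pattern is an assumption (the original format string from the question is not reproduced here), so adjust the character classes to the real input grammar:
#include <iostream>
#include <regex>
#include <string>

int main() {
    std::string line;
    if (!std::getline(std::cin, line)) {
        return 1;  // no input at all
    }

    // Assumed format: optional whitespace, then "letters", then "anything".
    static const std::regex pattern(R"rx(\s*"([A-Za-z]*)"\s*"([^"]*)")rx");

    std::smatch m;
    if (std::regex_search(line, m, pattern)) {
        std::string s1 = m[1];  // first quoted field (letters only)
        std::string s2 = m[2];  // second quoted field (anything but ")
        std::cout << "s1=" << s1 << " s2=" << s2 << '\n';
    } else {
        std::cerr << "input did not match the expected format\n";
        return 1;
    }
    return 0;
}
Unlike the scanf() version, a non-matching line simply fails the match and leaves no array in an indeterminate state.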
You can disable the safety check.
From the manual:
-fstack-protector
Emit extra code to check for buffer overflows, such as stack smashing attacks. This is done by adding a guard variable to functions with vulnerable objects. This includes functions that call alloca, and functions with buffers larger than 8 bytes. The guards are initialized when a function is entered and then checked when the function exits. If a guard check fails, an error message is printed and the program exits.
-fstack-protector-all
Like -fstack-protector except that all functions are protected.
If you would like to disable this, just add no- to the option name:
-fno-stack-protector -fno-stack-protector-all
Buffer overflow example:
#include <cstdio>    // gets() - intentionally unsafe, it performs no bounds check
#include <cstring>   // strncmp()
#include <iostream>

using namespace std;

int main() {
    int valid = 0;
    char str1[] = "START";
    char str2[8];
    gets(str2);                        // input longer than 7 chars overflows str2
    if (strncmp(str1, str2, 8) == 0) {
        valid = 1;
        cout << "buffer: " << str1 << ", " << str2 << ", " << valid << endl;
    }
}
|
Here is an example of how I handle my routing with child routes. I think this will help you and show you how to use child routes to apply a guard to some of your components. This will secure some views if the user lacks authentication. I separate mine into public and secure, routing everything through a layout and then loading the routes for whichever layout is chosen.
Make sure to export the child routes and that the correct routes are called in the layout route. Also make sure you use redirectTo in each child routes file.
We are defining our layouts, public or secure, then providing the routes file in each of those directories to take over once the correct route is picked.
app.routing.ts
const APP_ROUTES: Routes = [
{ path: '', redirectTo: '/home', pathMatch: 'full', },
{ path: '', component: PublicComponent, data: { title: 'Public Views' }, children: PUBLIC_ROUTES },
{ path: '', component: SecureComponent, canActivate: [Guard], data: { title: 'Secure Views' }, children: SECURE_ROUTES }
];
Then I have a layouts folder
layouts
layouts/public/public.components.ts
layouts/public/public.components.html
layouts/secure/secure.components.ts
layouts/secure/secure.components.html
secure.component.ts, which is the layout, looks like this:
import { Component, OnInit } from '@angular/core';
import { Router } from '@angular/router';
import { Auth } from './../services/auth.service';
@Component({
providers: [ Auth ],
selector: 'app-dashboard',
templateUrl: './secure.component.html'
})
export class SecureComponent implements OnInit {
constructor( private router: Router, private auth: Auth ) { }
ngOnInit(): void { }
}
Then in your secure directory you can create a component and select the template you will use for it:
@Component({
providers: [ Auth ],
templateUrl: './profile.component.html'
})
export class ProfileComponent implements OnInit {
constructor( private router: Router, private auth: Auth, private http: Http ) { }
ngOnInit() { }
}
Now make sure to create your child routes in the secure and public directories. Once a route is hit, the child route will load the correct class and its template file will be rendered.
Remember they will be children of your layouts. So you can put a navigation bar and footer in secure.component.html and it will show up in all of your secure components, because we are using selectors to load the content. All of your components, secure and public, will be loaded into the selector inside the layout's HTML file.
child routes
/public/public.routes.ts
export const PUBLIC_ROUTES: Routes = [
{ path: '', redirectTo: 'home', pathMatch: 'full' },
{ path: 'p404', component: p404Component },
{ path: 'e500', component: e500Component },
{ path: 'login', component: LoginComponent },
{ path: 'register', component: RegisterComponent },
{ path: 'home', component: HomeComponent }
];
/secure/secure.routes.ts
export const SECURE_ROUTES: Routes = [
{ path: '', redirectTo: 'overview', pathMatch: 'full' },
{ path: 'items', component: ItemsComponent },
{ path: 'overview', component: OverviewComponent },
{ path: 'profile', component: ProfileComponent },
{ path: 'reports', component: ReportsComponent }
];
Summary
We have set up an initial route file in the root directory of our Angular2 app. This route file directs traffic to one of two layouts, depending on whether the user is authenticated. Based on that, either the public layout or the secure layout is served. Each of those layouts then has a set of child routes and components which are served into the respective layout.
So to clear the file structure up,
root = /
Your main app routes, which control which layout is viewed:
/app.routing.ts
The layouts directory, which holds the secure and public layouts:
Public
/layouts/public.components.ts
/layouts/public.components.html
/layouts/public.routing.ts
Secure
/layouts/secure.components.ts
/layouts/secure.components.html
/layouts/secure.routing.ts
The public directory, which holds anything that is open to view without auth:
/public/home-component.ts
/public/home-component.html
The secure directory, which holds the routes and components that need auth:
/secure/profile-component.ts
/secure/profile-component.html
|
Most APIs use a POST request for authentication and expect to receive the data to validate (user/password) in the body. They also usually require extra information in the headers, like the format (e.g. application/json) of the data (the user/password data) you are sending. You are not passing any of that. Below is something that might work, but it all depends on what the API you are hitting expects (check its documentation).
fetch(url, {
method: 'POST',
headers: {
// Check what headers the API needs. A couple of usuals right below
'Accept': 'application/json',
'Content-Type': 'application/json'
},
body: JSON.stringify({
// Validation data coming from a form usually
email: email,
password: password
})
}).then(function (response) {
if (response.status != 200) {
dispatch(setError(response.status + '===>' + response.statusText + '===>' + response.url))
}
return response.json();
}).then(function (json) {
dispatch(setData(json, q))
}).catch(function(err){
console.log(err);
});
|
You're correct that at this point the most comprehensive solutions for authentication and authorization in systems that rely heavily on HTTP are based on OAuth 2.0 and OpenID Connect. This of course includes your specific scenario of a SPA calling a Web API back-end. For further reading on this generic case you can check the Auth0 SPA + API Architecture Scenario or take a look at the quickstarts focused on your selected technologies:
Angular2 Quickstart
ASP.NET Core Web API Quickstart
Note: Auth0 supports OAuth 2.0/OpenID Connect, so even though the docs may have additional provider-specific features, you may find them useful if you do indeed decide to go the OAuth 2.0/OpenID Connect route. That's one of the advantages of relying on standards: it's easier to switch between implementations/providers.
However, you should also consider if you really need to go full OAuth 2.0/OpenID Connect as they aim to solve a lot of different use cases and as such also bring significant complexity with them. If you go that route, it's recommended that you leverage existing libraries like IdentityServer or cloud providers like Auth0, because doing your own implementation carries a lot of risk and requires significant effort.
In order to meet your requirement of providing an integrated login from within your own Angular2 front-end you could probably look into the resource owner password credentials grant specified by OAuth2.
Another alternative is doing your own custom solution. This is generally frowned upon, because it's easy to get wrong, but the theory would be:
Handle user authentication and registration (possibly using ASP .NET Identity)
Upon login, exchange the user credentials for a token that you can later use to call into the API
The token could just be a random (not guessable) value used as a reference to some server-side storage that would contain information about the associated user. |
Each one in the list is a Java Authentication and Authorization Service (JAAS) configuration, which in turn contains an IBM implementation of the JAAS LoginModule.
According to the reference page, Login configuration for Java Authentication and Authorization Service:
The WSLogin module defines a login configuration and the LoginModule implementation that can be used by applications in general.
The ClientContainer module defines a login configuration and the LoginModule implementation that is similar to the WSLogin module, but enforces the requirements of the WebSphere Application Server client container.
The DefaultPrincipalMapping module defines a special LoginModule that is typically used by Java 2 Connector to map an authenticated WebSphere Application Server user identity to a set of user authentication data (user ID and password) for the specified back-end enterprise information system (EIS).
So for general use, you can use the WSLogin module. When you use a Java EE client, use the ClientContainer module. And when using Java 2 Connectors, use the DefaultPrincipalMapping module. |
Some clarification is in order. Cognito has several parts. The part that does "Authentication" (which is what you are talking about) is called "Cognito User Pools". Not to be confused with Cognito Federated Identity Pools.
With User Pools you can create username and password combinations with attributes, and these can be used to authenticate and deliver a persistent Cognito Federated Identity identityId to a user across multiple devices.
Once logged in, the Federated Identity Pool is hooked to roles which can get you "authorized" to use AWS services (like DynamoDB, etc.).
It can be tricky to get all these parts working together, and AWS has an online site called "Mobile Hub" that will build code for you and download an Xcode project. This process configures the Federated Identity Pool and the User Pool correctly, and connects them all up to a set of example code.
Connecting the credentials provider to the user pool to the identity pool is a bit counterintuitive, but the AWSIdentityManager in the aws-mobilehub-helper-ios on github manages all that for you. So I would recommend starting with mobile hub on the console.
Cognito is a somewhat confusing system; here is a link to a brief PowerPoint that hits the highlights of how it works (for people who can't understand the AWS docs, like me).
With that said, "how to check if a user already exists?"
The most reasonable approach is to create the user (via signup) and get a reject if the name is in use, then suggest that your user try a different username. With respect to the email being in use, you will get that reject upon confirmation (signup sends confirmation IDs by email and/or via text). This can be overridden to reclaim the email address, or you can do a test beforehand to see if the email is in use by attempting to log in and looking at the failure code.
You can fetch the user as the other answer suggests; however, if you have established an alias for login in user pools (like email), you will find this problematic, because it only tells you whether someone has the user name, not whether someone is already using the email address, and you will get a reject later at confirmation time.
There's a lot of things that can influence the best way to approach this; based on the provided information it's possible to showcase applicable options and do some recommendations, but the definitive choice is hard to make without having all the context.
TL;DR On a browser-based application where end users are issued username/password credentials and the application needs to make calls to an API on behalf of the users, you can use either the implicit grant or the resource owner password credentials grant (ROPC). The use of ROPC should be further restricted to client applications where there is a high degree of trust between the application and the entity that controls the user's credentials.
The use of client credentials is completely out of scope, and the authorization code grant may not present any benefit over the implicit grant as far as a browser-based application is concerned, so by a process of elimination we have two eligible grants.
Resource owner password credentials grant
(check Auth0 ROPC Overview for full details on the steps)
This grant was primarily introduced in OAuth 2.0 as a way to provide a seamless migration path for applications that were storing username/password credentials in order to access user resources without constantly asking the user to provide the credentials. As you can imagine, storing passwords in plain text is a big no-no, so having a way to stop doing that with a very simple migration path (exchange the stored credentials for tokens using this grant) would be a big improvement.
However, access tokens expire, so the way to keep obtaining new tokens is through the use of refresh tokens. Keeping refresh tokens in storage is better than keeping passwords, but they are still usually long-lived credentials, so storing those types of tokens has additional security considerations. Because of this, it's not usually recommended to keep/use refresh tokens in browser-based applications.
If you choose this grant you need to decide what will happen when the access tokens expire.
Implicit grant
(check Auth0 Implicit Grant Overview for full details on the steps)
This grant is a simplified version of the authorization code grant specifically aimed at applications implemented within a browser environment, so it does seem to be a better fit for your scenario.
An additional benefit is that obtaining new access tokens can happen transparently after the first user authentication; more specifically, by having the authorization server manage some concept of session, any implicit grant request for a user that is already authenticated and has already consented to that authorization can be completed automatically, without requiring user interaction.
Conclusion (aka opinion based on what I know which may not be sufficient)
As a general recommendation I would choose the implicit grant over the ROPC grant. |
Posting this here, since this is the first thread I found when searching for running NodeJS Passport authentication on Lambda.
Since you can run Express apps on Lambda, you really could run Passport on Lambda directly. However, Passport is really middleware specifically for Express, and if you're designing for Lambda in the first place you probably don't want the bloat of Express (since the API Gateway basically does all that).
As @Jason has mentioned, you can utilize a custom authorizer. This seems pretty straightforward, but who wants to build all the possible auth methods? That's one of the advantages of Passport: people have already done this for you.
If you're using the Serverless Framework, someone has built out the "serverless-authentication" project. This includes modules for many of the standard auth providers: Facebook, Google, Microsoft. There is also a boilerplate for building out more auth providers.
It took me a good bunch of research to run across all of this, so hopefully it will help someone else out. |
How does the original cookie get created for the actual CookieAuthenticationMiddleware?
The cookie authentication middleware signs in the user and creates the cookie. The cookie authorization middleware looks for the cookie values.
To illustrate, here are a few lines from the CookieAuthenticationHandler, which show the cookie authentication middleware setting a cookie.
Options.CookieManager.AppendResponseCookie(
Context,
Options.CookieName,
cookieValue,
cookieOptions);
And here are a few lines from the DenyAnonymousAuthorizationRequirement : IAuthorizationHandler that show the authorization middleware looking for a cookie value that has been added to the context.
var user = context.User;
var userIsAnonymous =
user?.Identity == null ||
!user.Identities.Any(i => i.IsAuthenticated);
if (!userIsAnonymous)
{
context.Succeed(requirement);
}
You also asked this:
...how would the Facebook authentication set a cookie that the cookie authentication middleware would recognize?
Facebook authentication is OAuth authentication, which means that it is at the bottom of the following inheritance hierarchy.
IAuthenticationHandler
└── AuthenticationHandler
    ├── CookieAuthenticationHandler
    └── RemoteAuthenticationHandler
        └── OAuthHandler
            └── FacebookHandler
In Facebook OAuth, RemoteAuthenticationHandler.HandleRemoteCallbackAsync handles Facebook's response and then makes a call to SignInAsync.
Context.Authentication.SignInAsync(
Options.SignInScheme,
context.Principal,
context.Properties);
That call to SignInAsync is what you hypothesized in your question.
...the Facebook middleware calls [SignInAsync] to set the cookie, but I don't really understand what this method does.
What SignInAsync does is complicated. The call to SignInAsync happens on the instance of AuthenticationManager that lives inside the Context.Authentication property. That instance is usually the DefaultAuthenticationManager. You can see the full SignInAsync call here, and here is a snippet from it.
public override async Task SignInAsync(
string authenticationScheme,
ClaimsPrincipal principal,
AuthenticationProperties properties)
{
...
var handler = HttpAuthenticationFeature.Handler;
var signInContext = new SignInContext(
authenticationScheme,
principal,
properties?.Items);
if (handler != null)
{
await handler.SignInAsync(signInContext);
}
...
}
In other words, SignInAsync calls SignInAsync on an authentication handler, which happens to be an instance of an IAuthenticationHandler from the inheritance hierarchy.
We eventually arrive at CookieAuthenticationHandler.HandleSigninAsync, which will append the cookie to the response. This cookie handler contains the only two calls to AppendResponseCookie in the Security repository.
protected override async Task HandleSignInAsync(SignInContext signin)
{
...
Options.CookieManager.AppendResponseCookie(
Context,
Options.CookieName,
cookieValue,
signin.CookieOptions);
...
}
There are a lot of moving parts. Hopefully this answer gives enough of an overview to fill in the missing pieces yourself. |
You can use ScalaPB to generate the gRPC stubs for Scala. First, add the plugin to your project/plugins.sbt:
addSbtPlugin("com.thesamet" % "sbt-protoc" % "0.99.1")
libraryDependencies += "com.trueaccord.scalapb" %% "compilerplugin" % "0.5.43"
Then, add this to your build.sbt:
libraryDependencies ++= Seq(
"io.grpc" % "grpc-netty" % "1.0.1",
"io.grpc" % "grpc-stub" % "1.0.1",
"io.grpc" % "grpc-auth" % "1.0.1",
"com.trueaccord.scalapb" %% "scalapb-runtime-grpc" % "0.5.43",
"io.netty" % "netty-tcnative-boringssl-static" % "1.1.33.Fork19", // SSL support
"javassist" % "javassist" % "3.12.1.GA" // Improves Netty performance
)
PB.targets in Compile := Seq(
scalapb.gen(grpc = true, flatPackage = true) -> (sourceManaged in Compile).value
)
Now you can put your .proto files in src/main/protobuf and they will be picked up by ScalaPB.
I have an example Scala gRPC project here. It shows how to configure mutual TLS authentication, user sessions using JSON Web Tokens, a JSON gateway via grpc-gateway, and deployment to Kubernetes via Helm. |
If using env.put(Context.SECURITY_AUTHENTICATION, "none"); did not work, then your AD environment may not support anonymous authentication, understandably. Anonymous authentication is disabled by default.
When you use new DirectoryEntry(...); in C#, it may seem like you are not using any credentials, but it really uses the credentials of the currently logged on user. So it's borrowing your own credentials to make the call to AD.
Java does not do that. In fact, from the brief Googling I've done just now, it seems like it's quite difficult to make it do that, if that's what you want to do.
There is a question about how to do that here: Query Active Directory in Java using a logged on user on windows
The comment there gives a couple products to look into.
But if you don't want to use any third-party products that may complicate things, you can just provide a username and password.
If you want to test anonymous authentication in C#, you can use something like this:
new DirectoryEntry(ldapPath, null, null, AuthenticationTypes.Anonymous)
|
In addition to IBM’s article “Introduction to Diameter” already mentioned by Hamed in a previous answer, Cisco’s article “Authentication, authorization, and accounting overview” also has some interesting information about Diameter, comparing it to RADIUS.
“Authentication identifies a user; authorization determines what that user can do; and accounting monitors the network usage time for billing purposes.” … “Diameter is the next-generation AAA protocol and overcomes (several) RADIUS deficiencies.”
“The RADIUS protocol carries authentication, authorization and configuration information between a NAS and a RADIUS authentication server.” (In this context, a NAS is a network access server, a gateway providing access to a protected network.) … “Implemented by several vendors of network access servers, RADIUS has gained support among a wide customer base.” RADIUS has codes for a limited number of attributes (including user name and password, service type, login information, etc.), so developers took advantage of its “vendor-specific attribute” (VSA) to exchange custom data, extending (in a proprietary manner) the scope of RADIUS, yet staying within its restrictions (e.g., attribute value no longer than 253 bytes).
So, why use Diameter? Diameter offers much greater flexibility (longer data field, expandability, capability negotiation), higher performance (“32-bit alignment”), greater reliability & availability (TCP and SCTP support, better acknowledgement mechanism and error messages, failover), increased security (“end-to-end security”), etc.
Just like FreeRADIUS implements a FOSS RADIUS server that you can install on a server so that your other applications and devices (e.g., a Wi-Fi access point or wired switch performing 802.1X authentication) can interact with it, FreeDiameter is a FOSS Diameter framework that you can install on a server. However, while TMCNews’ article “The Role of Diameter in IMS” (2007) mentions that “Diameter has been heavily adopted by the 3GPP in the IMS standards set”, I haven’t seen any consumer- or small-business-grade application or device that makes use of Diameter. On the other hand, Diameter is reportedly “backward compatible with RADIUS to ease migration” (Cisco), through a “translation agent” (IBM). For example, FreeDiameter has an extension, the “RADIUS/Diameter extensible gateway” (app_radgw.fdx), whose “purpose is to allow a RADIUS client to work with a Diameter server”. However, the documentation also warns that the “translation of RADIUS messages to Diameter is quite a complex task. It is likely that the translation plug-ins need some fine-tuning to fit your particular needs.”
|
"Quoted string" is very ambiguous. For example, in shell the dollar sign is special (and often should be escaped, as some other characters). But in HTML the <, >, &, ', " are special (and often should be escaped). In SQL statements you should only escape the double-quote and the nul character. In C, you would escape control characters and the quote and double-quote and backslash, etc... In JSON rules are slightly different.
So first code the appropriate quotation transformations. Perhaps you want to implement the following functions
QString quoted_for_shell(const QString&);
QString quoted_for_html(const QString&);
QString quoted_for_c(const QString&);
and so on.
(Perhaps you also want to code the reverse unquote transformations. BTW, quoting might be tricky: how would you quote my full name in Russian, in Cyrillic letters, Василий Дмитриевич Старынкевич, in C, since not all C implementations use UTF-8, even if they should?)
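As an illustration, here is a minimal sketch of one such helper, quoted_for_c (a name suggested above, not an existing Qt API). It only handles a few common escapes and assumes the target C compiler accepts UTF-8 in string literals:
#include <QString>

// Hypothetical helper: wrap a QString in double quotes and escape it so the
// result can be pasted into C source code as a string literal. Only a small
// subset of the escapes a real implementation would need is handled here.
QString quoted_for_c(const QString &in)
{
    QString out;
    out.reserve(in.size() + 2);
    out += QLatin1Char('"');
    for (const QChar c : in) {
        switch (c.unicode()) {
        case '"':  out += QLatin1String("\\\""); break;
        case '\\': out += QLatin1String("\\\\"); break;
        case '\n': out += QLatin1String("\\n");  break;
        case '\t': out += QLatin1String("\\t");  break;
        default:   out += c;                     break;
        }
    }
    out += QLatin1Char('"');
    return out;
}
Copying the result to the clipboard is then a one-liner inside a running Qt GUI application, e.g. QGuiApplication::clipboard()->setText(quoted_for_c(text)), which ties into the QClipboard and event-loop remarks below.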
Once you have implemented your quotation machinery (and that is perhaps harder and more ill-defined than you think!), you "just" want to copy QStrings to the clipboard. Then read the documentation of QClipboard and perhaps the chapter on drag and drop.
BTW, beware of code injection (which is partly why quoting is really important). Think of some malicious rm -rf $HOME string etc....
Actually, clipboard processing is a delicate thing with X11. See ICCCM & EWMH. You very probably need some event loop running (notably for very long strings of many millions of bytes, where the selection processing has to be incremental with several handshakes; the details can be tricky, but they are handled by Qt). So you might need QApplication::exec.
TLDR;
Validate the ID token before trusting what it says.
More Details
What is intent of ID token expiry time in OpenID Connect?
The intent is to allow the client to validate the ID token, and the client must validate the ID token before operations that use the ID token's information.
From the OpenID Implicit Flow spec:
If any of the validation procedures defined in this document fail, any operations requiring the information that failed to correctly validate MUST be aborted and the information that failed to validate MUST NOT be used.
To corroborate that, Google's OpenID Connect documentation says this about ID token validation:
One thing that makes ID tokens useful is the fact that you can pass them around different components of your app. These components can use an ID token as a lightweight authentication mechanism authenticating the app and the user. But before you can use the information in the ID token or rely on it as an assertion that the user has authenticated, you must validate it.
So, if our client application is going to take some action based on the content of the ID token, then we must again validate the ID token. |
It seems that the meaning of "stateless" is being (hypothetically) taken beyond its practical expression.
Consider a web system with no DB at all. You call a (RESTful) API, you always get exactly the same results. This is perfectly stateless... But this is perfectly not a real system, either.
A real system, in practically every implementation, holds data. Moreover, that data is the "resources" that the RESTful API allows us to access. Of course, data changes, due to API calls as well. So, if you get a resource's value, change its value, and then get its value again, you will get a different value than the first read; however, this clearly does not mean that the reads themselves were not stateless. They are stateless in the sense that they represent the very same action (or, more exactly, resource) for each call. A change has to be made explicitly, using another RESTful API call, to alter the resource's value, which will then be reflected in the next read.
However, what will be the case if we have a resource that changes without a manual, standard API verb?
For example, suppose that we have a resource that counts the number of times some other resource was accessed. Or some other resource that is being populated from some other third party data. Clearly, this is still a stateless protocol.
Moreover, in some sense, almost any system -- say, any system that includes an authentication mechanism -- responds differently for the same API calls, depending, for example, on the user's privileges. And yet, clearly, RESTful systems are not forbidden to authenticate their users...
In short, stateless systems are stateless for the sake of that protocol. If Google tracks the calls so that if I call the same resource in the same session I will get different answers, then it breaks the stateless requirement. But so long as the returned response is different due to application level data, and are not session related, this requirement is not broken.
AFAIK, what Google does is not necessarily related to sessions. If the same user runs the same search under completely identical conditions (e.g., IP, geographical location, OS, browser, etc.), they will get the very same response. If a new identical search produces different results due to what Google has "learnt" from the last call, it is still stateless, because, again, that second call would have produced the very same result had it been done in another session under identical conditions.