Blogs

Improving Laravel Application Security with Aikido

As your Laravel application grows, managing security objectives becomes more challenging, especially for small teams or solo developers

Today, Laravel has teamed up with Aikido to provide a seamless solution for securing your Laravel application

With Aikido, Laravel developers using Forge can effortlessly scan for and identify potential security vulnerabilities, all in less than 1 minute

Why Aikido?

Aikido scans your application for potential vulnerabilities and security issues, and surfaces the relevant findings directly within Forge so you can secure your application more quickly

We’ve had the pleasure of getting to know the team at Aikido as they’ve scaled their security platform to over 3,000 organizations and 6,000 developers in the US and EU

Their developer-first approach to security scanning and vulnerability flagging is already resonating in our community, with 30% of their user base leveraging Laravel

Furthermore, we appreciate that Aikido is itself a company built and fostered on PHP

The team built Aikido’s platform on PHP, and previously built multiple SaaS companies, including Teamleader, Officient, and Futureproofed, all in PHP

The team is committed to helping build and secure the next generation of incredible companies built on Laravel and PHP

Who should use Aikido?

No matter how big or small your Laravel application is, security matters to anyone hoping to scale their products to tens, hundreds, thousands, or millions of users

With increasing regulations and compliance standards such as SOC2, ISO 27001, HIPAA, and more, security requirements are now necessary for companies of all sizes to consider

However, we understand that developers (especially on smaller teams) can feel bogged down by managing their security

Between expensive and complex tools, endless false positives, and developer fatigue, no one wants to deal with security checks anymore

Aikido alleviates these burdens by bringing its code and cloud security scanners directly into Forge

We see the partnership with Aikido as valuable to any and all developers using Forge, and it will bring Forge further along its journey to becoming the best place to manage your Laravel applications

Integration with Laravel Forge

We’ve made adding Aikido’s security scanning to your Laravel application as simple as possible, by allowing you to connect your application to Aikido within Forge

By adding Aikido to your application through Forge, you enable Aikido to fetch and display security issues

Developers will see results in Forge in under a minute, can set up alerts, and get a step-by-step guide for fixing any issues

You can add Aikido by following these steps in Forge:

1: Log in to Laravel Forge

2: Navigate to "Account Settings", where you can then find the new Aikido integration

3: Follow the prompts to link your Aikido workspace and make sure to give it access to the repositories you want scanned

4: Navigate to a Forge site and check out the security findings in the new Aikido integration tab

Laravel Application Monitoring & Debugging with Sentry

At Laravel, we equip PHP developers with the most advanced tools to create exceptional applications

Today, we announce our partnership with Sentry, making it a preferred monitoring and debugging solution for Laravel projects using Forge or Vapor

This collaboration marks a significant step forward in our mission to ensure that PHP developers have access to modern, powerful tools to streamline app development

Why Sentry?

When it comes to application monitoring, having a centralized platform to understand how errors and performance issues impact your application is crucial for fixing broken code quickly

Sentry stands out as a world-leading debugging platform, trusted by Fortune 500 companies worldwide

It offers real-time code visibility and debugging capabilities, helping developers identify and resolve issues swiftly

By integrating Sentry with Laravel’s services, we are providing PHP developers with a robust solution to maintain high-quality, error-free, and performant applications

Sentry already serves thousands of Laravel applications and we’re thrilled with their increased commitment to improving workflows for PHP developers

Who should use Sentry?

Whether you’re a solo developer, part of a small team, or working on enterprise-level projects, Sentry helps you identify the root cause of an issue, down to the broken line of code

Its scalability and flexibility make it an ideal choice for anyone building web applications with Laravel

Sentry ensures that you have a comprehensive view of errors and slowdowns affecting your app, enabling you to maintain a seamless user experience and robust application performance

Benefits for PHP Developers

If you’re working on an existing PHP project, you may already have Sentry installed

If you don’t, or if you’re just getting started on a new endeavor, awesome: you can sign up for Sentry and add it to your site or project directly from Forge or Vapor

The integration will automatically notify Sentry when you create a new site or project and will create an associated Sentry project for you

This is just the beginning of what we’re able to do to make sure PHP developers get the right telemetry they need even before they push to production

Integration with Laravel Forge and Vapor

We are excited to announce that integrating Sentry with Laravel Forge and Vapor is now easier than ever

Laravel Forge and Vapor are already simplifying the process of deploying and managing PHP applications

With our new integration, activating Sentry within these environments is a breeze

Here’s how you can do it:

1: Log in to Laravel Forge or Vapor

2: Navigate to “User Profile” in Forge or “Team Settings” in Vapor (when logged in as the team owner), where you can find the new Sentry integration option

3: Follow the prompts to link your Sentry account

4: Configure Sentry and connect your site to create a Sentry project; you can create the project from the site’s or project’s Sentry panel

5: Deploy your application, and Sentry will start tracking errors immediately

Forge: Zero Downtime Deployments

Getting Started with Forge and Envoyer

To kick things off, you'll need active subscriptions for both Laravel Forge and Envoyer

Once you’re set up, navigate to your Envoyer dashboard and create a new API token

At a minimum, Forge requires the following scopes:

deployments:create

projects:create

servers:create

Next, it’s time to link Forge with your Envoyer API token

Navigate to your account settings in Forge and click on the Envoyer navigation item

When creating a new site in Forge, you’ll notice a new option labeled “Configure with Envoyer”

Toggle this option to reveal a dropdown menu, where you can either select an existing Envoyer project or create a brand new one

To deploy your Envoyer project within Forge, simply click the “Deploy Now” button, just as you would with any other site in Forge

The “Deployment Trigger URL” is also available for use in a CI environment
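
For example, a CI pipeline can trigger a deployment by sending a request to that URL once the build passes. Below is a minimal sketch in Python using the requests library; the FORGE_DEPLOY_TRIGGER_URL environment variable is a placeholder for wherever your CI stores the trigger URL, not an official setting

    import os
    import requests

    # Read the trigger URL from a CI secret (placeholder name, not an official variable)
    trigger_url = os.environ["FORGE_DEPLOY_TRIGGER_URL"]

    # Requesting the URL starts a deployment; consult the Forge documentation for
    # whether your setup expects a GET or a POST
    response = requests.post(trigger_url, timeout=30)
    response.raise_for_status()
    print("Deployment triggered:", response.status_code)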

When to trust an AI model

Because machine-learning models can give false predictions, researchers often equip them with the ability to tell a user how confident they are about a certain decision. This is especially important in high-stakes settings, such as when models are used to help identify disease in medical images or filter job applications

But a model’s uncertainty quantifications are only useful if they are accurate

If a model says it is 49 percent confident that a medical image shows a pleural effusion, then 49 percent of the time, the model should be right
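
To make that notion of calibration concrete, the sketch below bins predictions by stated confidence and compares each bin’s average confidence to its observed accuracy (a standard expected-calibration-error check). The confidences and correct arrays are illustrative placeholders for your own model’s outputs

    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=10):
        """Compare what the model claims (confidence) with how often it is right"""
        bins = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(bins[:-1], bins[1:]):
            mask = (confidences > lo) & (confidences <= hi)
            if mask.any():
                avg_confidence = confidences[mask].mean()  # stated confidence in this bin
                accuracy = correct[mask].mean()            # observed accuracy in this bin
                ece += mask.mean() * abs(avg_confidence - accuracy)
        return ece

    # Example: predictions near 0.49 confidence should be right about 49 percent of the time
    confidences = np.array([0.49, 0.51, 0.95, 0.90])
    correct = np.array([1.0, 0.0, 1.0, 1.0])
    print(expected_calibration_error(confidences, correct))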

MIT researchers have introduced a new approach that can improve uncertainty estimates in machine-learning models

Their method not only generates more accurate uncertainty estimates than other techniques, but does so more efficiently

In addition, because the technique is scalable, it can be applied to huge deep-learning models that are increasingly being deployed in health care and other safety-critical situations

This technique could give end users, many of whom lack machine-learning expertise, better information they can use to determine whether to trust a model’s predictions or if the model should be deployed for a particular task

“It is easy to see these models perform really well in scenarios where they are very good, and then assume they will be just as good in other scenarios

This makes it especially important to push this kind of work that seeks to better calibrate the uncertainty of these models to make sure they align with human notions of uncertainty,” says lead author Nathan Ng, a graduate student at the University of Toronto who is a visiting student at MIT

Ng wrote the paper with Roger Grosse, an assistant professor of computer science at the University of Toronto; and senior author Marzyeh Ghassemi, an associate professor in the Department of Electrical Engineering and Computer Science and a member of the Institute of Medical Engineering Sciences and the Laboratory for Information and Decision Systems

The research will be presented at the International Conference on Machine Learning

Quantifying uncertainty

Uncertainty quantification methods often require complex statistical calculations that don’t scale well to machine-learning models with millions of parameters

These methods also require users to make assumptions about the model and data used to train it

The MIT researchers took a different approach

They use what is known as the minimum description length principle (MDL), which does not require the assumptions that can hamper the accuracy of other methods

MDL is used to better quantify and calibrate uncertainty for test points the model has been asked to label

The technique the researchers developed, known as IF-COMP, makes MDL fast enough to use with the kinds of large deep-learning models deployed in many real-world settings

MDL involves considering all possible labels a model could give a test point

If there are many alternative labels for this point that fit well, its confidence in the label it chose should decrease accordingly

“One way to understand how confident a model is would be to tell it some counterfactual information and see how likely it is to believe you,” Ng says

For example, consider a model that says a medical image shows a pleural effusion

If the researchers tell the model this image shows an edema, and it is willing to update its belief, then the model should be less confident in its original decision

With MDL, if a model is confident when it labels a datapoint, it should use a very short code to describe that point

If it is uncertain about its decision because the point could have many other labels, it uses a longer code to capture these possibilities

The amount of code used to label a datapoint is known as stochastic data complexity
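
As a rough illustration of that idea (not the IF-COMP method itself), the Shannon code length of a label under the model’s predicted distribution is the negative base-2 log of its probability: a confidently predicted label gets a short code, while a point with several plausible labels gets a longer one. The probability vectors below are made up for the example

    import numpy as np

    def code_length_bits(probs, label):
        """Shannon code length for one label: short when the model is confident"""
        return -np.log2(probs[label])

    confident = np.array([0.97, 0.01, 0.01, 0.01])   # one label clearly fits
    uncertain = np.array([0.40, 0.30, 0.20, 0.10])   # several labels fit fairly well

    print(code_length_bits(confident, 0))   # about 0.04 bits: low stochastic data complexity
    print(code_length_bits(uncertain, 0))   # about 1.32 bits: higher complexity, less confidence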

If the researchers ask the model how willing it is to update its belief about a datapoint given contrary evidence, the stochastic data complexity should decrease if the model is confident

But testing each datapoint using MDL would require an enormous amount of computation

Speeding up the process

With IF-COMP, the researchers developed an approximation technique that can accurately estimate stochastic data complexity using a special function, known as an influence function

They also employed a statistical technique called temperature-scaling, which improves the calibration of the model’s outputs

This combination of influence functions and temperature-scaling enables high-quality approximations of the stochastic data complexity
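
Temperature scaling itself is a small step: the model’s raw logits are divided by a single temperature before the softmax, which softens overconfident probabilities without changing which class is predicted. A minimal sketch follows; the logits and temperature values are illustrative, and in practice the temperature is fit on held-out data

    import numpy as np

    def softmax(z):
        z = z - z.max()              # shift for numerical stability
        e = np.exp(z)
        return e / e.sum()

    def temperature_scale(logits, temperature):
        """Divide logits by the temperature before softmax; values above 1 soften the distribution"""
        return softmax(np.asarray(logits, dtype=float) / temperature)

    logits = [4.0, 1.0, 0.5]
    print(temperature_scale(logits, temperature=1.0))  # original, sharper probabilities
    print(temperature_scale(logits, temperature=2.0))  # scaled, less overconfident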

In the end, IF-COMP can efficiently produce well-calibrated uncertainty quantifications that reflect a model’s true confidence

The technique can also determine whether the model has mislabeled certain data points or reveal which data points are outliers

The researchers tested their system on these three tasks and found that it was faster and more accurate than other methods

“It is really important to have some certainty that a model is well-calibrated, and there is a growing need to detect when a specific prediction doesn’t look quite right

Auditing tools are becoming more necessary in machine-learning problems as we use large amounts of unexamined data to make models that will be applied to human-facing problems,” Ghassemi says

IF-COMP is model-agnostic, so it can provide accurate uncertainty quantifications for many types of machine-learning models

This could enable it to be deployed in a wider range of real-world settings, ultimately helping more practitioners make better decisions

“People need to understand that these systems are very fallible and can make things up as they go

A model may look like it is highly confident, but there are a ton of different things it is willing to believe given evidence to the contrary,” Ng says

In the future, the researchers are interested in applying their approach to large language models and studying other potential use cases for the minimum description length principle
