Articles tagged with: #context

These are all of our top articles tagged with #context to help you find what you are looking for.

Replacing Docker Desktop for Mac with Colima for use with DDEV - first impressions

Based on the fact that Colima is open-source, Docker Desktop's new license terms, and the apparent performance gains of using Colima, it seemed like a no-brainer to give it a spin.

Installing Colima alongside Docker Desktop for Mac and starting a fresh Drupal 9 site: To get started, I first installed Colima using Homebrew (brew install colima), then ran ddev poweroff (just to be safe). Next, I started Colima with colima start --cpu 4 --memory 4 and spun up a new Drupal 9 site via ddev config, ddev start, etc. (It is recommended to enable DDEV's mutagen functionality to maximize performance.) When colima start is run, it automatically switches docker to the "colima" context.

For those of us who are casual Docker users (outside of DDEV), one confusing bit is that we still need the open-source Docker client installed - which is installed by default with Docker Desktop for Mac. The docker client is used on the command line to connect to the installed Docker provider (Colima or Docker Desktop for Mac, in this context). The reason for this (as I understand it) is that Colima uses the open-source Lima project for managing its containers and volumes (the latter being where DDEV project databases are stored). In other words, if you have an existing project up-and-running in DDEV, then add Colima, then restart the project, your database won't be found. And if you want to go 100% pure Colima and uninstall Docker Desktop for Mac, you'll need to install and configure the Docker client independently.

Switching between a Colima DDEV project and a Docker Desktop for Mac DDEV project: ddev poweroff, then colima stop, then docker context use default - this last command tells the Docker client which containers we want to work with. Recent versions of Colima revert the Docker context back to "default" when Colima is stopped, so the docker context use default command is no longer necessary. Technically, starting and stopping Colima isn't necessary either, but the ddev poweroff command when switching between the two contexts is. Regardless, I use docker context show to verify that either the "default" (Docker Desktop for Mac) or "colima" context is in use.

How I use Colima: I currently have some local projects using Docker Desktop and some using Colima. If you choose to keep using both, then when issuing docker commands from the command line, you'll need to first specify which containers you want to work with - Docker's or Colima's. After switching, I ran ddev start on an existing project I had previously set up while running Docker Desktop for Mac.

Summarizing: Overall, I'm liking what I see so far.

Building auth endpoint with Go and AWS Lambda

Introduction: When I was playing around with my pet-project Kyiv Station Walk, I noticed that manually removing test data is tedious, and I needed to come up with a concept for an admin page. What I wanted was a super-lightweight service which would check a login and password against a pair of super-user credentials.

Setup: Our lambda function is to be called from outside over HTTP, so we place an HTTP Gateway in front of it, which looks something like this in the AWS Console.

Project structure: In order to decouple our authentication logic from FaaS internals, our project has two files: auth.go, where the authentication logic resides, and main.go, where that logic is integrated with AWS Lambda. main.go wires the handler up with lambda.Start(HandleRequest), and a clientError(status int) helper builds error responses using http.StatusText(status).

Authentication: For our purposes, we'll omit the usage of persistent storage, since one pair of credentials is enough. Still, we need to hash the stored password with a hash function which allows the defender to verify a password in acceptable time but requires a lot of resources for an attacker to guess the password from the hash. Argon2 has a Go implementation, so the authentication is quite straightforward: we derive a key from the submitted password and a salt, and compare it against the stored key. Note how for both a wrong login and an incorrect password, we return the same "auth failed" message in order to disclose as little information as possible.

JWT generation: Once the service verifies that credentials are valid, it issues a token which allows its bearer to act as a super-user. We'll need a Claims struct carrying the username along with the standard JWT claims. Since an adversary who intercepts such a token may act on behalf of the super-user, we don't want this token to be effective infinitely, because that would grant the adversary infinite privileges, so the token carries an expiry.

Leveraging environment variables: Our credentials are hardcoded in the codebase for now. You can leverage environment variables instead with the help of the os package, e.g. login := os.Getenv("LOGIN") and salt := os.Getenv("SALT"). Here's how you set them up in the AWS console.

Building it: go build -o main main.go, and zipping it: ~\Go\Bin\build-lambda-zip.exe -o main. Using Windows: if you're a Windows user, you'll need the following environment variables set before building.

Response format: In order for our endpoint to be consumed from the outside, we have to provide the response in a special format for the API gateway, using the events package. A successful response carries StatusOK and the JWT as the body; an error response uses the status code and http.StatusText.

Minimizing attack surface: At this point, our function is open to some vulnerabilities, so we have to perform some additional work on our API gateway. Endpoint throttling: the default settings are too high for an authorization function that is not expected to be invoked often. IP whitelist: neither do we want our function to be accessible from any IP possible. A snippet in the "Resource policy" section of the API gateway settings allows us to create a whitelist of IP addresses that can access our lambda. In order to obtain the ARN, we can navigate back to the Lambda configuration page and check it by clicking on the API Gateway icon.

Testing the API gateway: At this point, our API is ready to be consumed. On the consumer side, the F# client reads the "Authorization" request header, validates the token with JwtSecurityTokenHandler and the validation parameters, and only then performs the protected operation (such as deleting a route), returning a "Forbidden" error otherwise.

Conclusion: Serverless is a great option for smallish nanoservices. This brings some cost savings, as serverless comes to me almost free due to the low execution rate that I anticipate for the admin page of my low-popularity service. Due to its minimalistic philosophy, Go is suitable not only for applications that leverage sophisticated concurrency, but also for simple services like the one described in this post.

Amazon wants to map your home, so it bought iRobot

Amazon now owns four smart home brands (in addition to its Alexa platform, anchored by its Echo smart speakers and smart displays): home security company Ring, budget camera company Blink, mesh Wi-Fi pioneer Eero, and now robot vacuum maker iRobot. From a smart home perspective, it seems clear Amazon wants iRobot for the maps its robots generate, to give it a deep understanding of our homes. And in the smart home that Amazon is making a major play for, context is king.

When I spoke to iRobot's Colin Angle earlier this summer, he said iRobot OS - the latest software operating system for its robot vacuums and mops - would provide its household bots with a deeper understanding of your home and your habits. "But if I don't know where the kitchen is, and I don't know where the refrigerator is, and I don't know what a beer looks like, it really doesn't matter that I understand your words." Each of iRobot's connected Roomba vacuums and mops trundles around homes multiple times a week, mapping and remapping the spaces. On its latest model, the j7, iRobot added a front-facing, AI-powered camera that, according to Angle, has detected more than 43 million objects in people's homes.

All this makes it likely this purchase isn't about robotics; if that's what Amazon wanted, it would have bought iRobot years ago. Instead, it probably picked up the company (for a relative bargain - iRobot just reported a 30 percent revenue decline in the face of increasing competition) to get a detailed look inside our homes. Knowing your floor plan provides context, and in the smart home, context is king. This type of data is digital gold to a company whose primary purpose is to sell you more stuff.

"We really believe in ambient intelligence - an environment where your devices are woven together by AI so they can offer far more than any device could do on its own," Marja Koopmans, director of Alexa smart home, told me in an interview last month. With context, the smart home becomes smarter; devices can work better and work together without the homeowner having to program them or prompt them to do so.

Astro - Amazon's "lovable" home bot - was likely an attempt at getting that data. The robot has good mapping capabilities, powered by sensors and cameras that allow it to know everything from where the fridge is to which room you are currently in. But for a thousand dollars, with limited capabilities (it couldn't vacuum your home) and no general release date, Astro isn't getting that info for Amazon anytime soon. Ring's Always Home Cam has similar mapping capabilities, allowing the flying camera to safely navigate your home.

Add in iRobot, and Amazon has many of the elements needed to create an almost sentient smart home, one that can anticipate what you want it to do and do it without you asking. With detailed maps of our homes and the ability to communicate directly with more smart home devices once Matter arrives, Amazon's vision of ambient intelligence in the smart home suddenly becomes a lot more attainable.

While I'm interested to see how Amazon can leverage iRobot's tech to improve its smart home ambitions, many are right to be concerned about the privacy implications. Amazon's history of sharing data with police departments through its subsidiary Ring, combined with its "always listening (for the wake word)" Echo smart speakers and now its thorough knowledge of your floor plan, gives it a pretty complete picture of your daily life. (Currently, users can opt out of Roomba's Smart Maps feature, which stores mapping data and shares it between iRobot devices.)

People want home automation to work better, but they don't want to give up the intimate details of their lives for more convenience. This is a conundrum throughout the tech world, but in our homes, it's far more personal. Amazon will need to do a lot more to prove it's worthy of this type of unfettered access to your home.

React Context and Hooks

React Context and Hooks. Here is why I'm writing: while working on a React project, I ran into a problem while trying to share data between components. You must be asking, why didn't I pass props? 🤔 Yeah, I did, but I felt there must be a better and cleaner way of sharing the data.

Context API: It's an easy way to share state within your component tree. See it as 👉 "All components in your React project have access to your data", so I don't need to keep passing props all around. Here it is (photo credit: net ninja): I have data in App.js, but I need this data in PageView, Navbar, BookList, StatusBar, BookDetails and AddBook.

Hooks: React Hooks allow you to use useState, useEffect and other React features within your functional components ("I said functional components"). useState allows you to use state within a component, that is, to handle changes when the component updates, e.g. switching a like button. useEffect allows you to handle side effects in React and runs code when the component re-renders, e.g. API calls. useContext allows you to consume context in a functional component, e.g. for sharing data between components.

Context API in React JS

There are two ways to transfer data between components: i) props, and ii) the Context API.

Props: One way to transfer data between components is sending it through props. Disadvantage: if we want to share the data in the parent component with the child of a child component (i.e., a grandchild), the props have to be passed down through every component in between.

Context API: To overcome this disadvantage of props, React JS introduced a concept called the Context API, where the state and functions are created and maintained in a context and can be shared among the components. The Context API can be implemented in both functional components and class components.

Implementation on functional component: Create a React project and create the following files inside the src folder: Context/Funcontext, Components/Formcomponent and Components/Listcomponent.

Funcontext.jsx

```jsx
import { createContext, useState } from "react";

export const Funccontext = createContext({})

export default function Funccontextprovider(props) {
  const [name, setName] = useState()
  const changeName = (e) => {
    setName(e.target.value)
  }
  return (
    <Funccontext.Provider value={{ name, changeName }}>
      {props.children}
    </Funccontext.Provider>
  )
}
```

Formcomponent.jsx

```jsx
import { useContext } from 'react'
import { Funccontext } from '../Context/Funcontext'

export default function Formfunccomp() {
  const { changeName } = useContext(Funccontext)
  return (
    <>
      <input type="text" onChange={(e) => changeName(e)} />
    </>
  )
}
```

Listcomponent.jsx

```jsx
import { useContext } from "react";
import { Funccontext } from "../Context/Funcontext";

export default function Listfunccomp() {
  const { name } = useContext(Funccontext)
  return (
    <>
      <p>Name : {name}</p>
    </>
  )
}
```

App.js

```jsx
import './App.css';
import Funccontextprovider from './Context/Funcontext'
import Formcomponent from './Components/Formcomponent'
import Listcomponent from './Components/Listcomponent'

function App() {
  return (
    <div className="App">
      <Funccontextprovider>
        <Formcomponent />
        <Listcomponent />
      </Funccontextprovider>
    </div>
  );
}

export default App;
```

Implementation on class component: For the class component implementation, create a React project and create the following files inside the src folder: Components/Formcomponent, Components/Listcomponent and Components/Classcontext.

Classcontext.jsx

```jsx
import React, { Component } from 'react'

const Classcontext = React.createContext()

export class Classcompprovider extends Component {
  state = { name: "sdfsf", email: "", mobileno: "" }
  setName = (val) => { this.setState({ name: val }) }
  setEmail = (val) => { this.setState({ email: val }) }
  setMobileno = (val) => { this.setState({ mobileno: val }) }
  showAlert = () => { alert('Show alert') }
  render() {
    const { name, email, mobileno } = this.state
    const { setName, setEmail, setMobileno, showAlert } = this
    return (
      <Classcontext.Provider
        value={{ name, email, mobileno, setName, setEmail, setMobileno, showAlert }}>
        {this.props.children}
      </Classcontext.Provider>
    )
  }
}

export default Classcontext;
```

Formcomponent.jsx

```jsx
import React, { Component } from 'react'
import Classcontext from './Classcontext'

class Formclasscomponent extends Component {
  static contextType = Classcontext
  render() {
    const { setName, setEmail, setMobileno, showAlert } = this.context
    return (
      <>
        <div className="form-group">
          <label>Name : </label>
          <input type="text" onChange={(e) => { setName(e.target.value) }} />
        </div>
        <div className="form-group">
          <label>Email : </label>
          <input type="email" onChange={(e) => { setEmail(e.target.value) }} />
        </div>
        <div className="form-group">
          <label>Mobile No : </label>
          <input type="number" onChange={(e) => { setMobileno(e.target.value) }} />
        </div>
        <div className="form-group">
          <input type="submit" value="submit" onClick={() => { showAlert() }} />
        </div>
      </>
    )
  }
}

export default Formclasscomponent;
```

Listcomponent.jsx

```jsx
import React, { Component } from 'react'
import Classcontext from './Classcontext'

class Listclasscomponent extends Component {
  static contextType = Classcontext
  render() {
    const { name, email, mobileno } = this.context
    return (
      <>
        <p>Name : {name}</p>
        <p>Email : {email}</p>
        <p>Mobile No : {mobileno}</p>
      </>
    )
  }
}

export default Listclasscomponent;
```

App.js

```jsx
import './App.css';
import Formclasscomponent from './Components/Formcomponent';
import Listclasscomponent from './Components/Listcomponent';
import { Classcompprovider } from './Components/Classcontext';

function App() {
  return (
    <div className="App">
      <Classcompprovider>
        <Formclasscomponent />
        <Listclasscomponent />
      </Classcompprovider>
    </div>
  );
}

export default App;
```

Entity Framework Core Code First Publishing Multiple Db Contexts in The Same Database

Background: I once encountered a scenario where we had to deploy two code-first DB contexts (EF Core 3 and 5) of two different projects in the same SQL Server database. To make things more manageable, I was thinking of using different migration history tables.

Solution: ASP.NET 6, EF Core 6. This example is also tested in core 5.

Db Connection String: In appsettings.json we will find the target DB connection:

```json
{
  "ConnectionStrings": {
    "DatabaseConnection": "Data Source=.\\SQLEXPRESS;Initial Catalog=Cup;Integrated Security=True"
  }
}
```

Db Context: We will find the DB context inside the Db.App project. This is a basic Entity Framework DB context:

```csharp
using Microsoft.EntityFrameworkCore;

[Table("Teams")]
public class Team
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class AppDb : DbContext
{
    public DbSet<Team> Teams { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        var config = new ConfigurationBuilder()
            .AddJsonFile(Path.Combine(Directory.GetCurrentDirectory(), "appsettings.json"))
            .Build();
        optionsBuilder.UseSqlServer(config.GetConnectionString("DatabaseConnection"));
    }
}
```

Let's add a migration and update the database in the project. This process will add a new table Teams and a migration log table __EFMigrationsHistory to the database.

Another Db Context: We will find the second DB context inside the other project. The difference is the MigrationsHistoryTable call, which tells EF Core to record this context's migrations in its own history table:

```csharp
using Microsoft.EntityFrameworkCore;

[Table("FtpFiles", Schema = "log")]
public class FtpFile
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class AppDb : DbContext
{
    public DbSet<FtpFile> FtpFiles { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        var config = new ConfigurationBuilder()
            .AddJsonFile(Path.Combine(Directory.GetCurrentDirectory(), "appsettings.json"))
            .Build();
        optionsBuilder.UseSqlServer(
            config.GetConnectionString("DatabaseConnection"),
            d => { d.MigrationsHistoryTable("__EFMigrationsHistory_FtpLog"); });
    }
}
```

Let's add a migration and update the database in this project as well. This process will add a new table FtpFiles with the log schema.

Commands:

```powershell
Add-Migration MigrationName
Update-Database
Script-Migration
Drop-Database
Remove-Migration MigrationName
```

Migrations History Table: The database now contains two different migration tables.

Multiple Db Contexts in the same project: By default, the Entity Framework populates migration files inside the Migrations folder. If multiple DB contexts are present in the same project, they will use the same migration folder. To populate the migrations of different DB contexts in different folders, we can use the listed commands:

```powershell
Add-Migration MigrationName -c DbContextName -o ProjectFolderName\Migrations\DbContextName
Remove-Migration MigrationName -Context DbContextName
```

Limitations: It is better not to use the same table names in both DB contexts. A relationship between two tables of two different DB contexts is not possible using the code-first process. If we run Drop-Database in either project, it will drop the entire database.

About the code sample: Visual Studio 2022.

Are you technocentric? Shifting from technology to people

When we teach children and young people about computing, do we consider how the subject has developed over time, how it relates to our students' lives, and, importantly, what our values are? Professor Pratim Sengupta shared some of the research he and his colleagues have been working on related to these questions in our June 2022 research seminar. Pratim is a learning scientist based in Canada with a long and distinguished career. Grounded in working with teachers and students, he brings together computing, science, education, and social justice. Pratim revealed a complex landscape where we as educators can be easily trapped by what may seem like good intentions, thereby limiting learning and excluding some students. In this blog post, particularly for those unable to attend this stimulating seminar, I give my simplified view of the rich philosophy shared by Pratim, and my fledgling steps to admit to my technocentrism and overcome it.

Pratim started the seminar by giving us an overview of some of the key ideas that underpin the way that computing is usually taught in schools, including technocentrism (Figure 1). Figure 1: The features of technocentrism, a way of thinking about how we teach computing, particularly programming (Sengupta, 2022). I have come to a simplified understanding of technocentrism. To me, it appears to be a way of looking at how we learn about computer science, where one might:

- Focus on the finished product (e.g. a computer program), rather than thinking about the people who create, learn about, or use a program
- Ignore the context and the environment, rather than paying attention to the history, the political situation, and the social context of the task at hand
- View computing tasks as being implemented (enacted) by writing code, rather than seeing computing activities as rich and complex jumbles of meaning-making and communication that involve people using chatter, images, and lots of gestures
- Anchor learning in concepts and skills, rather than placing the values and viewpoints of learners at the heart of teaching

Examples of technocentrism and how to overcome it: Pratim recounted several research activities that he and his team have engaged with. In the first example research activity, Pratim explained how in maths and physics lessons, middle school students were asked to develop models to solve time and distance problems. Figure 2: Two graphs from students showing different representations of a context, and a researcher's bar chart representing how students' shared understanding emerged over time (Sengupta, 2022). In a second example research activity, students were asked to build a machine that draws shapes using sensors, motors, and code. Figure 3: Students used physical movements and user guides to be with others and publicly share and experience the task with authentic users (Sengupta, 2022). In a third example research activity, racial segregation of US communities was discussed with pre-service teachers. Figure 4: To facilitate discussion of racial segregation, a simulation was used that bridges abstracted dots and real people, giving pre-service teachers a space to reflect on discrimination (Sengupta, 2022).

My takeaways: Pratim shared four implications of this research for computing pedagogy (see Figure 5). As a researcher of pedagogy, these points provide takeaways that I can relate to my own research practice:

- Code is a voice within an experience rather than symbols at a point in time.
- Code lives as a translation bridging many dimensions, such as data representation, algorithms, syntax, and user views.
- Uncertainty and ambiguity exist in learning, and this can take time to recognise.
- We should listen carefully and attentively to teachers, rather than making assumptions about what happens in classrooms.

For example, when I listen to students predicting what a snippet of code will do [1], I think of the active nature of each carefully chosen command and how, for each student, the code corresponds with them differently. Listening to Pratim share his research on the teaching and learning of computing and the pitfalls of technocentrism has made me think deeply about how I view computer science as a subject and do research about it. Pratim and his team challenge how we focus on making technological artefacts - code for code's sake - in computing education, and refocus us on the human experience of coding and learning to code.

[1] You can learn more in the Hello World article where our Chief Learning Officer Sue Sentance talks about the block model.

If you would like to find out more about Pratim's work, please look over his slides, watch his presentation, read the upcoming chapter in our seminar proceedings, or respond to this blog by leaving a comment so we can discuss!

Join our next seminar: Between May 2022 and November 2022, we are hosting a new series of free research seminars about teaching computing in different ways and in different contexts. We have another four seminars in our current series on cross-disciplinary computing. At our next seminar on 12 July 2022 at 17:00–18:30 BST / 12:00–13:30 EDT / 9:00–10:30 PDT / 18:00–19:30 CEST, we will welcome Prof. Yasmin Kafai and Elaine Griggs, who are going to present research on introductory equity-oriented computer science with electronic textiles for high school students.

Introduction to observability: What it is and why it’s important

What is observability? Observability is how well you know what's happening inside of your software system without writing new code. Observability can help you ask and answer important questions about your software system and all the different states it can go through by observing it. If you were asked which of your microservices are experiencing the most errors, what the worst-performing part of your system is, or what the most common frontend error your customers are experiencing is, would you be able to answer those questions? All of these are important questions to ask, and they can be answered with data-driven information created by implementing good observability practices. In this article, you'll learn what observability is, why it's important and what kinds of problems observability helps solve.

Why is observability important? Observability is critical to the success of any application. According to Stripe's The Developer Coefficient report, developers spend around 42 percent of their time on work like debugging and refactoring, and good observability can win much of that time back. Software also used to be far simpler; now, software offerings, frameworks, paradigms and libraries have hugely increased the complexity of systems due to things like cloud infrastructure, distributed microservices, multiple geo-locations, multiple languages, and container orchestration technology. Observability is also vitally important for new software practices. Observability, when done correctly, gives you incredible insights into the deep internal parts of your system and allows you to ask complex, improvement-focused questions, such as: Where is your system fragile? Does any code need to be reworked or rewritten? You begin to understand your system more deeply, and when you gain a deep, intricate view of your system, you can identify your holes and where you need to improve.

Observability vs. monitoring: Monitoring and observability are often confused; however, it's important to understand their differences so that you can implement both accurately. Monitoring deals with known unknowns: questions you already know to ask, tracked reactively. Monitoring is important but is different from observability. Observability generally deals with unknown unknowns; this is less reactive and is normally broadly termed discovery work. For example, you may not even know you don't have much information in your payments backend system, and this is where observability comes into play.

What problems does observability help solve? There are numerous benefits when you follow good observability practices and bake them directly into your software system, including the following:

Releases are faster: When you know more about your system, you can iterate quicker. By observing your system, you can make changes or refactor and directly measure the customer impact.

Incidents become easier to fix: When you have clear insights and data for key parts of your code and business, you provide your developers with the context and information they need to fix things. Having key information, such as the following, allows you to significantly reduce your mean time to recover from an incident: How do you replicate the incident? Does a service error occur when you replicate it? For instance, I have experience working at a multibillion-dollar company with millions of concurrent users. There were many cases when the (suspected) cause of an incident was fixed, passed QA, and was released, but the developer was wrong, and the process had to start all over again. Good observability takes the guesswork out of this process and can offer far more context, data and assistance to resolve issues in your system.

It helps you decide what to work on: With the extra information you gain from good observability practices, you're able to decide what you need to work on. With good observability, you'll know what your customers' biggest frustrations are, and this information can help drive your product roadmap or bug backlog. For instance, if a certain bug affects only 0.001 percent of the customer base, occurs in a rarely used language, and is easily fixed by a refresh, it makes sense to focus on more severe system bugs. Good observability allows you to make data-driven, positive business decisions.

Observability best practices: There are a few best practices that you should follow when implementing observability, including the following:

Remember the three pillars of observability: logs, metrics, and traces. These are all different types of time-series data and can help improve your system's observability. Each of these serves as a useful and important part of the observability of your system and offers unique and powerful insights into it.

Conduct A/B testing: A/B testing is an important tool to drive improvements in your product and your code. An example would be to move the navigation of your site from the footer to the header, where most sites normally place it. Good observability lets you get rid of the poorly performing version of your test and use your A/B test to drive your positive key performance indicator (KPI) metrics.

Don't throw away context: For your system to truly be observable, you need to maintain as much context as possible. Context includes things like the time of your event. For instance, if your system starts to get an error at a certain time, context could be the key to truly observing and deciphering the cause.

Maintain unique IDs throughout the system: In systems where multiple parts need to communicate, one single event may commonly be aliased. For example, if your frontend page sends a customer to a payment page, you may have a unique ID for the customer that is hard to correlate to the payment they just made. You need to ensure that all the different parts of your system are speaking one unified language.

Conclusion: In this article, you learned about the importance of observability and the common questions that regularly appear when encountering it, such as why it's important and what problems it solves.

Mastering Redux with Redux Toolkit: writing correct Redux!

Redux is a popular state management library used by numerous enterprise companies. Personally, I still think learning Redux properly will benefit us as frontend engineers, precisely because it remains one of the most popular libraries and is used by a lot of big companies.
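To see what "correct Redux" means in miniature, here is a hand-rolled sketch of the core pattern: a pure reducer function (state, action) -> newState, wrapped by a tiny store. This is my own illustration, not the article's code; in real projects Redux Toolkit's createSlice and configureStore generate this boilerplate for you:

```javascript
// Minimal store: holds state, lets you dispatch actions through a reducer,
// and notifies subscribers after each dispatch.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action); // never mutate, always replace
      listeners.forEach((listen) => listen());
      return action;
    },
    subscribe: (listener) => { listeners.push(listener); },
  };
}

// A pure reducer: given the old state and an action, return the new state.
function counterReducer(state = { value: 0 }, action) {
  switch (action.type) {
    case 'counter/increment':
      return { ...state, value: state.value + 1 };
    case 'counter/addBy':
      return { ...state, value: state.value + action.payload };
    default:
      return state;
  }
}

const store = createStore(
  counterReducer,
  counterReducer(undefined, { type: '@@init' }) // derive initial state
);
store.dispatch({ type: 'counter/increment' });
store.dispatch({ type: 'counter/addBy', payload: 4 });
// store.getState().value is now 5
```

The key discipline ("correct Redux") is that the reducer stays pure: no mutation, no side effects, all changes expressed as plain action objects.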

How to interrogate unfamiliar code

Programmers who are good at writing code are valuable, but programmers who are good at reading code are arguably even more so. Books and tutorials typically focus on the craft of code, the programmer's ability to theorize, write, and modify code in an effective and readable way, rather than the far more common activity of reading and interpreting code that already exists. A popular claim about code is that it's read ten times as often as it's written. This is generally used as an argument for fastidious coding: you're going to be seeing your code a lot, so spend some extra time on quality. In light of that information, it seems we underinvest in the skill of understanding code. It's true that reading code is a skill that lies downstream from writing it (if you can write Python, it stands to reason that you can understand it), but reading code is also a skill on its own. Reading code is time-consuming and often boring as well; readable code is great, but not all code will be immediately readable. The ability to read code effectively is a secret weapon that will speed you through technical interviews and make you an essential member of any team. In this article, I'll explain the most practical code-reading tactics I've picked up over the course of my career.

Understand first, write code second
New developers sometimes feel that if they don't spend a majority of their time adding new code to a project, they're not being productive. But effective coding requires both context and confidence: you need to understand the environment your code will live in and feel sure your work adds value. The first 80 to 95% of the time you spend on a task should be spent reading code and other forms of documentation. Sometimes it's even 100%: in the process of studying existing code, you may learn enough to be able to say "this feature already exists, we've just forgotten about it" or "this will do more harm than good." Reading code is what will get you there.

Install useful plugins
Your IDE is an invaluable tool for understanding code. Editors like Visual Studio, VS Code, Eclipse, and IntelliJ IDEA live and die by the strength of their code parsing abilities and the size of their plugin libraries. Look for the following features:
Syntax highlighting: shows keywords, class/method/field/variable names, and brackets in different colors to aid comprehension.
Code hinting: shows information (such as types, parameters, and handwritten documentation) about a class/method/field when you hover your cursor on it.
Static analysis: alerts you to problems in your code without actually running it.
Contextual navigation: provides menu options like "Jump to Definition," "See Implementations," and "See References" when you open the context menu (right click) on an identifier.
Refactoring: automates common refactors like extracting logic to a method, changing method parameters, or renaming a variable.
Auto-formatting: modifies whitespace, line length, and other elements of style to be more readable and consistent.
Version control integration: helps you sync and merge code with the rest of your team. Also provides information about the author and last edit date of each line of code.
Debugger: lets you set breakpoints in your code so you can step through a particular process one line at a time and inspect the values in scope.
Test runner: provides a UI for running unit and integration tests and reports the results.
You can read code without these tools, and sometimes you might have to.

Read the code at least twice
One read-through is almost never enough to fully understand what a piece of code is doing, and a third read-through is valuable if the code contains complex logic. Choose simple values for any parameters or variables and imagine them flowing through the code from top to bottom. Make sure you understand each line of code or at least have a theory about it. It's totally normal to read through a piece of code ten times or more before you really get it. Your goal is to be able to summarize what the code does: explain its overall purpose in basic terms. When you finish, you'll have theories about possible behaviors, edge cases, and failure conditions in the code.

Refactor local variable and method names
Sometimes a piece of code is so vague or misleading it's hard to reason about. For example, consider the following piece of JavaScript:

    function ib(a, fn) {
      return (a || []).reduce((o, i) => {
        o[fn(i)] = i;
        return o;
      }, {});
    }

It's very hard to read, and the name ib is useless at helping you understand what it does. You can make some inferences about it, though: since reduce is being called on a (and it falls back to an empty array), a is meant to be an array type. So a bit of renaming gets us here:

    function ib(array, fn) {
      return (array || []).reduce((dict, element) => {
        dict[fn(element)] = element;
        return dict;
      }, {});
    }

You can see now that fn is used to turn an array element into a dictionary key. Renaming a few identifiers has helped us understand the code without changing its logic or having to think about all of its parts at once.

Look at how the code is used
Most code is used by other code. If you're struggling with a piece of code but you understand a situation where it's used, that can be valuable context for figuring out what it's doing. Ideally your IDE will let you right-click the method name (or click a context hint button) and select "See References". If your IDE doesn't have that feature but you're working in a compiled or transpiled language, another trick you can use is to rename the method to something ridiculous like ThisBreaksOnPurpose. If neither of these is possible, you can fall back to a text search for the method name. Search tools usually include a "whole word" search option, meaning that a search for care.exe won't return results like scare.exertion; if not, you may end up with a larger result set and have to dig through a lot of code that isn't relevant. Spend time diving into each of these usages so you can understand all the logic at play, even if it lives outside the code you're studying. Once you do that, you've got another perspective for understanding the code.

Search for similar code
Sometimes code is hard to understand even if all the identifiers are well-named and the use cases are familiar. And in the worst-case scenario, the code in question is either unique to the codebase you're working in or there's no obvious phrase you can Google to learn more about it. The good news is that truly unique code is rare in long-lived codebases, especially at the grain of a single expression or line of code. If you take a few minutes to search for similar code in the project, you might find something that unlocks the whole puzzle. Choose a snippet of code that stands out and paste it into the universal search pane in your IDE (often bound to the ctrl + shift + F shortcut). The goal is to narrow the search down to a few files that are most likely to mirror the process you're studying. Occasionally, even a regex won't narrow things down enough, and nobody wants to spend several hours sifting through search results for something that may not even help. My go-to is JS Powered Search, a VS Code extension that lets you define a logical search query in JavaScript (full disclosure: I am the author of JSPS).

Run unit tests
In a perfect codebase, unit tests would be all you'd need to understand the behavior of any section of code. Still, it's a good idea to check for tests that execute the code you're studying: they're actual evidence that the code works a certain way. Write a test or two to answer the questions you still have about the code. You could extract the code to a sandbox and run it there (sometimes this is the right move), but as long as you're exploring its behavior, you might as well use a test runner. Tests take time to write but are far more effective than running code in your imagination. And if you end up needing to modify the code, your tests will give you confidence that you're not breaking it.

Use the debugger
Once you have some unit tests (or even just a simple one that executes the code without assertions), you've got a great setup for step-by-step debugging. Set a breakpoint (most IDEs let you do this by clicking next to the line number in the code editor) or add a breakpoint/debugger statement at the top of the piece of code. If you know which user actions trigger the code in question, you can set your breakpoint and run the program normally, interacting with its interface to make the code run. Top-to-bottom debugging may be less useful for code that runs tens or hundreds of times, like a nested loop. For code like this you may want to add variables that aggregate data on each iteration so you can look at them afterward.

Read the documentation
Documentation may explain the "how" of a piece of code, but it's often better at explaining "why." If your team uses a knowledge base like Stack Overflow for Teams, Confluence, or a GitHub wiki, by now you should have a pretty good idea of what terms or concepts you could search for to find relevant documentation. A good piece of internal documentation may also point you toward a teammate who knows what's going on. Keep in mind that documentation shouldn't be your only source of truth: it starts going out of date the moment it's published, and the only thing you can fully rely on to tell you how a piece of code behaves is the code itself.

Track down the author, commit, and ticket
One last way to gather context is to track down the original author, commit message, and project management ticket associated with the code. Your version control system (Git, Subversion, Mercurial or whatever you use) has a tool that reveals the author and commit for any line of code in the codebase. If the most recent commit for that line of code isn't meaningful (say, it's a formatting or whitespace change), you may have to look through the file's change history to find the commit where the line of code was introduced. By looking up the commit hash in your team's version control or project management app, you should be able to find the original pull request that included the code, and from there you can hopefully follow a link to the original ticket where the feature or bug fix was requested. Once you've got a PR and ticket in hand, you not only have valuable context from the time the code was written, you've found the names of everyone who had a hand in it: the code's author, PR reviewers, anyone who commented on or updated the ticket, the person who signed off on QA.

Ask a teammate
By now you've learned everything the code itself will tell you, as well as everything Google, Stack Overflow, and your team's documentation will tell you. And even then there may be missing pieces to the puzzle: a bizarre design decision, a method that breaks patterns the rest of the codebase follows, a code smell with no obvious justification. Sometimes you understand what a piece of code is doing, but something about it just doesn't seem right. Paste the code in question for a teammate to see; there's a good chance they'll notice something you didn't.

The context and understanding you've gained over the course of these steps is likely to be valuable in the future. Before you move on, consider refactoring the code for clarity, creating new documentation, or even just sending out an email with your findings. Improving code readability will benefit the entire team over and over again, even if it doesn't add or change functionality. Any time you invest here will pay dividends as you and your team interact with the code in the future. Remember that there's no magic in code!
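Putting the "write a test or two to answer your questions" advice into practice on the ib() example from this article, a couple of assertions can confirm the theory that it indexes an array by a computed key. The sample data below is my own illustration, not from the article:

```javascript
// The renamed helper from the article: fold an array into a dictionary,
// keyed by whatever fn() returns for each element.
function ib(array, fn) {
  return (array || []).reduce((dict, element) => {
    dict[fn(element)] = element;
    return dict;
  }, {});
}

// Theory 1: each element becomes a value, fn(element) becomes its key.
const users = [{ id: 1, name: 'Ada' }, { id: 2, name: 'Grace' }];
const byId = ib(users, (u) => u.id);
// byId['2'] is the Grace object.

// Theory 2: the (array || []) fallback makes null/undefined input safe.
const empty = ib(null, (u) => u.id);
// empty is {}.
```

Two short assertions like these turn "I think this indexes by key" into actual evidence, and they stay behind as a safety net if the code ever needs modifying.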

Researchers announce new AI-based technology that can create short videos based on single images

Earlier this week, Google scientists announced the creation of Transframer, a new framework with the ability to generate short videos from a single input image. The technology, demonstrated using Google's DeepMind AI platform, works by analyzing a single context image to obtain key pieces of image data and generate additional frames. The prediction models the probability of additional image frames based on the data, annotations, and any other information available from the context frames. Just as Transformer uses language to predict potential outputs, Transframer uses context images with similar attributes in conjunction with a query annotation to create short videos. The resulting videos move around the target image and visualize accurate perspectives, despite no geometric data having been provided in the original image inputs. "Transframer is a general-purpose generative framework that can handle many image and video tasks in a probabilistic setting. New work shows it excels in video prediction and view synthesis, and can generate 30s videos from a single image," DeepMind (@DeepMind) tweeted on August 15, 2022. The framework marks a huge step in video technology by providing the ability to generate reasonably accurate video from a very limited set of data. The new technology could someday augment traditional rendering solutions, allowing developers to create virtual environments based on machine learning capabilities. Technologies such as Transframer have the potential to offer developers a completely new development path, using AI and machine learning to build their environments while reducing the time, resources, and effort needed to create them.

Story of Core JavaScript (0): Exploring ReactJS as a back-end engineer

So, I was a back-end developer working with Go (Golang) and Python. Honestly speaking, I was a pure back-end engineer, responsible for the back-end system, the scripts, and sometimes the deployment. As a Go/Python developer, I was already familiar with syntax and structure in general, and it was fairly easy for me to move from a statically-typed language to a dynamic one because I had quite good experience with Python. Phase 0: I started learning the core of JavaScript first, the execution context, how this works, what the event loop is, how the call stack works, the whole underlying model of the language, and how single-threaded, synchronous JavaScript works as an asynchronous, seemingly multi-threaded language. That covered:
Execution context.
Event loop.
Call stack.
Scope and variables.
Functions (higher-order, anonymous, callback).
Array methods: filter, reduce, map.
How classes work.
How asynchronous JS works.
Learning about the V8 engine.
Why webpack (and Babel) are needed.
After learning these things, I had the whole architecture of JavaScript in my head. Lastly, that core understanding helped me move into the 2nd phase, where I learned ES6.
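The "single-threaded but asynchronous" behavior described above can be seen in just a few lines. This is my own minimal illustration (not from the article), showing how the call stack empties before any queued callbacks run:

```javascript
// Synchronous code runs to completion on the call stack first; the event
// loop then drains the microtask queue (promise callbacks) before taking
// the next macrotask (timer callbacks), so the order below is deterministic.
const order = [];

order.push('sync 1');
setTimeout(() => order.push('timer (macrotask)'), 0);
Promise.resolve().then(() => order.push('promise (microtask)'));
order.push('sync 2');

// Once the stack is empty, order ends up as:
// ['sync 1', 'sync 2', 'promise (microtask)', 'timer (macrotask)']
```

Even with a 0ms delay, the timer callback runs after the promise callback, which is exactly the execution-context and event-loop machinery the article's Phase 0 is about.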

14 Free OSINT Tools for Adding Context in Person-of-Interest Investigations

Person-of-interest (POI): definition. Person-of-interest is a term originally widely used by law enforcement and intelligence officials to identify someone linked to, and/or in possession of information pertinent to, an ongoing criminal investigation. As the specialisation of online investigations grew beyond the military and law enforcement into the private sector, the term has been adopted by trained practitioners (for example cyber security experts or open-source intelligence analysts) conducting specialised investigations of persons. Law enforcement agencies, private companies, and security consultancies alike suffer from a shortage of personnel, time, and budget to run comprehensive investigations that could match the scale of their task.

Laravel BDD (Behavior-Driven Development)

Looking at the basic BDD structure, we can see it uses the keywords Feature, Scenario, Given, When, Then, and And. Let's go through them: Feature briefly describes the functionality in the spec, and Scenario is a test case. For example:

    Given API 網址為 "/v2/auth/register"
    When 以 "POST" 方法要求 API
    Then 回傳狀態應為 400
    And 回傳的錯誤訊息為 "發生錯誤"

Once we understand the basic BDD structure, we can go straight to implementation; the following uses Laravel for BDD. First, install the BDD package with Composer:

    composer require behat/behat --dev

Then initialize:

    ./vendor/bin/behat --init

Initialization produces a features folder. (Ps: it only contains features/bootstrap/FeatureContext.php; the other files below are ones I added myself!) The generated FeatureContext starts out with just an empty constructor:

    /**
     * Defines application features from the specific context.
     */
    class FeatureContext implements Context
    {
        public function __construct()
        {
        }
    }

Here, however, we need to wire up the test setup and teardown methods ourselves, as well as a custom env file:

    <?php

    use Behat\Behat\Context\Context;
    use Behat\Gherkin\Node\PyStringNode;
    use Behat\Gherkin\Node\TableNode;
    use Illuminate\Contracts\Console\Kernel;

    /**
     * Defines application features from the specific context.
     */
    class FeatureContext extends \Illuminate\Foundation\Testing\TestCase implements Context
    {
        protected const ENV_FILE = '.env.behat';

        /**
         * @var \Illuminate\Foundation\Application
         */
        protected static $contextSharedApp;

        /**
         * @return \Illuminate\Foundation\Application
         */
        public function createApplication()
        {
            $app = require __DIR__ . '/../../bootstrap/app.php';

            // Custom env file to load
            $app->loadEnvironmentFrom(self::ENV_FILE);
            $app->make(Kernel::class)->bootstrap();

            return $app;
        }

        /**
         * @BeforeScenario
         */
        public function before(): void
        {
            if (!static::$contextSharedApp) {
                parent::setUp();
                static::$contextSharedApp = $this->app;
            } else {
                $this->app = static::$contextSharedApp;
            }
        }

        /**
         * @AfterScenario
         */
        public function after(): void
        {
            if (static::$contextSharedApp) {
                parent::tearDown();
                static::$contextSharedApp = null;
            }
        }
    }

With that defined, we can add a register.feature file in the features folder to describe the operations and spec for member registration. A feature file can also declare its locale; here we set it to Traditional Chinese, in which case Feature, Given, When, Then, and And become 功能, 假定, 當, 那麼, and 而且:

    #language: zh-TW
    @auth @authRegister
    功能: 使用者註冊會員

      @authRegisterNoData
      場景: 前端沒有傳入任何參數
        假定 API 網址為 "/v2/auth/register"
        當 以 "POST" 方法要求 API
        那麼 回傳狀態應為 400
        而且 回傳的錯誤訊息為 "請輸入帳號或者密碼"

Ps: @auth and @authRegister here are tags, used to label a Feature or Scenario so that, for example, you can later run only the scenarios carrying a specific tag.

Next, add a behat.yml file in the project root to hold Behat's basic configuration:

    default:
      suites:
        auth_features:
          paths:
            - "%paths.base%/features/tests/auth"
          contexts:
            - ApiFeatureContext        # API-related steps
            - DatabaseAssertionContext # Database-related steps

After configuring this, run vendor/bin/behat --init once more; this generates ApiFeatureContext and DatabaseAssertionContext for us. ApiFeatureContext looks like this:

    /**
     * API-related steps.
     */
    class ApiFeatureContext extends FeatureContext
    {
        protected $apiUrl = '';
        protected $apiBody = [];
        protected $response;

        /**
         * @Given API 網址為 :apiUrl
         *
         * @param string $apiUrl
         */
        public function apiUrl(string $apiUrl)
        {
            $this->apiUrl = $apiUrl;
        }

        /**
         * @Given API 附帶資料為
         *
         * @param TableNode $tableNode
         */
        public function apiBody(TableNode $tableNode)
        {
            $this->apiBody = $tableNode->getHash()[0];
        }

        /**
         * @When 以 :method 方法要求 API
         *
         * @param string $method
         */
        public function request(string $method)
        {
            $this->response = $this->json($method, $this->apiUrl, $this->apiBody);
        }

        /**
         * @Then 回傳狀態應為 :statusCode
         *
         * @param int $statusCode
         */
        public function assertStatus(int $statusCode)
        {
            $this->response->assertStatus($statusCode);
        }
    }

DatabaseAssertionContext looks like this:

    /**
     * Database-related steps.
     */
    class DatabaseAssertionContext extends FeatureContext
    {
        /**
         * @Then 資料表 :tableName 應有資料
         *
         * @param string $tableName - table name
         * @param TableNode $tableNode
         */
        public function assertTableRecordExisted(string $tableName, TableNode $tableNode)
        {
            $this->assertDatabaseHas($tableName, $tableNode->getHash()[0]);
        }
    }

At this point we can see that a Context mainly defines the step phrases that appear in the Feature files. The BDD tests can then be run with the behat command:

    vendor/bin/behat

Reddit acquires contextualization company Spiketrap to boost its ads business – TechCrunch

Reddit has acquired contextualization company Spiketrap to boost its ads business. Deal terms were not disclosed, but Reddit says Spiketrap's AI-powered contextual analysis and tools will help it improve in areas like ad quality scoring and will boost the prediction models powering auto-bidding. The deal signals Reddit's growing investment in its advertising business as it aims to make it easier for advertisers to target relevant audiences based on interests.

"Our goal has always been to contextualize language at scale and in realtime to help creators, brands, and platforms genuinely understand and meaningfully engage their audiences," said Kieran Fitzpatrick, CEO and co-founder of Spiketrap, in an announcement. "We believe targeting relevant audiences based on interests and with the context of the conversations they are engaging in helps ensure advertisers are reaching the right people in the most efficient ways," said Reddit EVP of Ads Monetization, Shariq Rizvi, in a statement about the deal.

The company touts its proprietary Clair AI technology, which is able to extract the "signal from the noise" of unstructured datasets. In particular, it promotes technology that can do things like figure out the entities a piece of content may be referring to, even if they're not explicitly mentioned, in areas like movies, TV shows, games, franchises, and more. Combined with its knowledge graph, the company offers a range of solutions in contextual ad targeting, impact measurement, brand safety monitoring, and other research and data-as-a-service areas, it says. Meanwhile, its Emotion AI can also detect the sentiment around a post, like excitement, sarcasm, or toxicity; the latter is an area Reddit has struggled with over the years, often having to shut down hateful and deplorable subreddits, those that engage in harassment, and others that advertisers would want to avoid. With the former, Reddit could use the ML technology to improve its capabilities across a range of areas, like recommendations for its newer Discover tab, plus its safety work and targeted ads business.

This deal also arrives at a time when Apple's consumer privacy tools (App Tracking Transparency, or ATT) have been impacting the effectiveness of online ads across major tech platforms, like Facebook and Snap, as consumers opt out of ads personalization. Reddit says the Spiketrap team has already joined the company and will spearhead a number of projects across its ads business going forward. Today's announcement follows other recent acquisitions by Reddit, including Spell, a platform for running machine learning experiments, in June, and natural language processing company MeaningCloud in July.

This is an app that anyone can use, at no cost. It is an artificial intelligence assistant that reads the news for you and provides you with relevant information, while generating revenue.