11+ IT security assessment Jobs in Bangalore (Bengaluru)
Apply to 11+ IT security assessment Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest IT security assessment Job opportunities across top companies like Google, Amazon & Adobe.
- Threat and vulnerability analysis.
- Investigating, documenting, and reporting on any information security (InfoSec) issues as well as emerging trends.
- Analysis and response to previously unknown hardware and software vulnerabilities.
- Preparing disaster recovery plans.
SOC analysts are considered the last line of defense; they usually work as part of a large security team, alongside security managers and cybersecurity engineers. Typically, SOC analysts report to the company's chief information security officer (CISO).
SOC analysts need to be detail-oriented because they are responsible for monitoring many systems and alerts simultaneously. They need to watch the protected network and respond to threats and events. The level of responsibility typically depends on the size of the organization.
Key Responsibilities:
- Sales Strategy Development: Develop and implement comprehensive sales strategies for the NBFC and BFSI verticals to meet and exceed business objectives and revenue targets.
- New Business Development: Identify, target, and engage potential clients in the NBFC and BFSI sectors. Build and nurture long-term relationships with key stakeholders, including decision-makers within financial institutions, banks, NBFCs, insurance companies, and related businesses.
- Account Management: Take ownership of key accounts and ensure customer satisfaction by providing tailored solutions, managing the sales lifecycle, and ensuring the timely delivery of services.
- Market Analysis: Continuously assess market trends, competitor activities, and customer needs to adjust the sales approach and identify new opportunities for business growth.
- Collaboration: Work closely with marketing, product development, and other cross-functional teams to ensure the delivery of solutions that meet customer needs and expectations.
- Sales Reporting & Forecasting: Regularly report on sales performance, pipeline progress, and forecasts to senior management. Provide actionable insights based on sales data.
- Contract Negotiation & Closing: Lead the negotiation process, draft proposals, and close deals in alignment with the company's objectives.
Job Description: Network Fresher
Role: Network Fresher
Experience: Fresher (Will be working as a Trainee for 1 year)
Location: Bangalore
Notice Period: Immediate
Shift Timings and Working Days: Rotational Shifts & 6 Days working (complete work from office)
Current Location: Candidates must be currently located in Bangalore
Required Skills:
- Basic understanding of networking concepts and protocols.
- CCNA training is mandatory.
- Knowledge of Linux, server management, AWS, and cloud computing.
- Strong analytical and problem-solving skills.
- Ability to work in rotational shifts.
- Excellent verbal and written communication skills.
Educational Qualification:
- Graduation is a must (B.Tech / B.Sc / B.E / BCA). Candidates should have a provisional or passing certificate.
Job Description:
Please find the details below:
Experience - 5+ Years
Location - Bangalore
Role Overview
We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.
Key Responsibilities
- Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
- Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses); a minimal sketch follows this list.
- Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
- Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
- Ensure data quality and consistency by implementing validation and governance practices.
- Work on data security best practices in compliance with organizational policies and regulations.
- Automate repetitive data engineering tasks using Python scripts and frameworks.
- Leverage CI/CD pipelines for deployment of data workflows on AWS.
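For illustration only, here is a minimal Python sketch of the kind of ETL step described above: it reads raw CSV objects from S3, cleans them with pandas, and writes the curated result back as Parquet. The bucket names, prefix, and column name are hypothetical placeholders rather than details of this role, and the Parquet output assumes pyarrow is installed.

```python
# Minimal ETL sketch; buckets, prefix, and columns are placeholders,
# and writing Parquet assumes pyarrow is available.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

RAW_BUCKET = "example-raw-data"       # hypothetical source bucket
CURATED_BUCKET = "example-curated"    # hypothetical target bucket


def extract(prefix: str) -> pd.DataFrame:
    """Read every CSV object under the prefix into one DataFrame."""
    frames = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=RAW_BUCKET, Prefix=prefix):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=RAW_BUCKET, Key=obj["Key"])["Body"]
            frames.append(pd.read_csv(body))
    return pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()


def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Basic cleanup: drop duplicates and normalise a timestamp column."""
    df = df.drop_duplicates()
    df["event_time"] = pd.to_datetime(df["event_time"], errors="coerce")
    return df.dropna(subset=["event_time"])


def load(df: pd.DataFrame, key: str) -> None:
    """Write the curated data back to S3 as Parquet."""
    buffer = io.BytesIO()
    df.to_parquet(buffer, index=False)
    s3.put_object(Bucket=CURATED_BUCKET, Key=key, Body=buffer.getvalue())


if __name__ == "__main__":
    load(transform(extract("events/2024/")), "events/curated.parquet")
```

In practice a pipeline like this would typically run as a scheduled Glue job or Lambda rather than a standalone script, with the transformation logic unit-tested separately.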
Required Skills and Qualifications
- Professional Experience: 5+ years of experience in data engineering or a related field.
- Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
- AWS Expertise: Hands-on experience with core AWS services for data engineering, such as:
- AWS Glue for ETL/ELT.
- S3 for storage.
- Redshift or Athena for data warehousing and querying.
- Lambda for serverless compute.
- Kinesis or SNS/SQS for data streaming.
- IAM Roles for security.
- Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
- Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
- DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
- Version Control: Proficient with Git-based workflows.
- Problem Solving: Excellent analytical and debugging skills.
Optional Skills
- Knowledge of data modeling and data warehouse design principles.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
- Exposure to other programming languages like Scala or Java.
Years of Experience: 2-5 Years
Notice: Immediate to 30 Days
Requirements:
- 3+ years of web development experience using Node.js or similar web technologies
- Well-versed with front-end code in HTML5, CSS3, Javascript, React.js with familiarity in various frameworks and template languages
- Possess strong understanding of Object-Oriented Programming.
- Proficient with database design, optimization and tuning in MySQL or MongoDB
- Experience in design patterns, unit testing, automation techniques (Selenium WebDriver)
- Exposure to Amazon Web Services (EC2, S3, EBS, RDS, SQS, Redshift, etc.)
- Exposure to Docker and Kubernetes
- Exposure to collaboration tools like GitHub, JIRA, Confluence
- Experience in frameworks such as Symfony 2, Express.js, or proven ability to learn on the job
- Experience in Microservices and REST architecture
- Exposure to Scrum methodology and XP technical practices such as unit testing, pair programming, test-driven development, continuous integration or continuous delivery
- Self-motivated, fast learner, detail-oriented, team player and a sense of humor.
We're looking for highly skilled experienced engineers to design and build high-scale, cloud-based data processing systems that can handle massive amounts of data with low latency. You'll work with a team of smart, motivated, and diverse people and be given the autonomy and support to do your best work. This is a rare opportunity to make a meaningful impact in society while working in a dynamic and flexible workplace where you'll belong and be encouraged.
Qualifications:
- Bachelor's Degree required
- Significant experience with distributed systems.
- Experience with modern programming languages such as Java, C#, C/C++, or Ruby.
- Experience with container platforms such as DC/OS, Kubernetes
- Fluency in technologies and design concepts around Big Data processing and relational databases, such as the Hadoop ecosystem, Map/Reduce, stream processing, etc.
- Experience with production operations and good practices for putting quality code into production and troubleshooting issues when they arise.
- Effective communication of technical ideas verbally and in writing, including technical proposals, design specs, architecture diagrams, and presentations.
- Ability to collaborate effectively with the team and other stakeholders.
- Preferably, production experience with Cloud and data processing technologies.
Responsibilities:
As a member of the software engineering division, you will take an active role in the definition and evolution of standard practices and procedures. You will define specifications for significant new projects and design and develop software according to those specifications, performing professional software development tasks associated with developing, designing, and debugging software applications or operating systems.
- Design and build distributed, scalable, and fault-tolerant software systems.
- Build cloud services on top of the modern OCI infrastructure.
- Participate in the entire software lifecycle, from design to development, to quality assurance, and to production.
- Invest in the best engineering and operational practices upfront to ensure our software quality bar is high.
- Optimize data processing pipelines for orders-of-magnitude higher throughput and lower latencies.
- Leverage a plethora of internal tooling at OCI to develop, build, deploy, and troubleshoot software.
Key Responsibilities:
- Rewrite existing APIs in NodeJS.
- Remodel the APIs into a microservices-based architecture.
- Implement a caching layer wherever possible (a minimal sketch of this pattern follows this list).
- Optimize the API for high performance and scalability.
- Write unit tests for API Testing.
- Automate the code testing and deployment process.
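Purely to illustrate the caching-layer idea referenced above (the role itself works in NodeJS), below is a minimal read-through cache sketch in Python with Redis; the key format, TTL, and placeholder fetch function are assumptions, not part of this job description.

```python
# Read-through cache illustration; key format, TTL, and the fetch
# function are hypothetical. The role would implement this in NodeJS.
import json

import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
CACHE_TTL_SECONDS = 300  # assumption: 5 minutes of staleness is acceptable


def fetch_user_from_db(user_id: int) -> dict:
    """Placeholder for the expensive database or upstream API call."""
    return {"id": user_id, "name": f"user-{user_id}"}


def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit: skip the slow path
    user = fetch_user_from_db(user_id)     # cache miss: fall through
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(user))
    return user
```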
Skills Required:
- At least 2.5 years of experience developing backends using NodeJS; should be well versed with its asynchronous nature and event loop, and know its quirks and workarounds.
- Excellent hands-on experience using MySQL or any other SQL Database.
- Good knowledge of MongoDB or any other NoSQL Database.
- Good knowledge of Redis, its data types, and their use cases.
- Experience with GraphQL and with graph databases like Neo4j.
- Experience developing and deploying REST APIs.
- Good knowledge of Unit Testing and available Test Frameworks.
- Good understanding of advanced JS libraries and frameworks.
- Experience with WebSockets, Service Workers, and Web Push Notifications.
- Familiar with NodeJS profiling tools.
- Proficient understanding of code versioning tools such as Git.
- Good knowledge of creating and maintaining DevOps infrastructure on cloud platforms.
- Should be a fast learner and a go-getter, without any fear of trying out new things.
Preferences:
- Experience building a large-scale social or location-based application.
About the role:
We are looking for an experienced Software Development Engineer II (SDE2) to help deliver high visibility and impact features for the ChakraHQ Platform. ChakraHQ Platform is the world's first Omnichannel Process Automation Platform. Imagine AWS but for operations and business teams; built on cutting edge technology stacks, to solve problems for modern businesses.
Technology specialists at ChakraHQ are at the core of the company's decisions and vision. As an SDE2, you will contribute to the design and development of the core technology stack of the ChakraHQ Platform. Working closely with Engineering, Product Management, Sales, and Customer Success, you will take ownership of developing features, plugins, and custom fixes that will materially impact customers' and ChakraHQ's business. You will also be responsible for maintaining a streamlined build and CI/CD system, and will help coordinate the incorporation of upstream features into the ChakraHQ Platform.
Technologies you will work with: React, Javascript, Android, iOS, PostgreSQL, Serverless, AWS, Google Cloud
Responsibilities:
- Design, develop and maintain features, services, products that are part of ChakraHQ
- Own delivery of said features and services
- Own the success of the products by tracking their lifecycle with customers
- Build systems that scale horizontally
- Implement clean and modern mobile interfaces that provide an excellent user experience.
- Write automated tests to ensure code quality
- Work with customers to build a product roadmap
- Work with sales & marketing to sell your product to end-users
- Work as an integral part of an agile software development team to build features end-to-end
- Support those features in the ChakraHQ production environment by participating in an on-call rotation
Position Requirements:
- Bachelor's degree in Computer Science.
- 3+ years of experience working on teams to develop and deploy web or mobile applications
- Expertise with Javascript
- Knowledge of frameworks such as React.js is a big plus
- Ability to write code compatible across browsers and other clients
- Good understanding of backend systems i.e. web services, APIs from a consumer perspective
- Proficiency with git and Github workflows
- Expertise with test-driven development and automated testing
- Excellent analytical and problem-solving skills
- Excellent communication skills and fluent English
- Open to learning and working on new technologies
- Administration and Support for Azure DevOps Server/Services
- Migration from Azure DevOps Server to Azure DevOps Services (SaaS)
- Process Template Customization and Deployment model
- Migration, Upgrade, Monitoring, and Maintenance of the ADS Instance
- Automation using REST API to build Extensions and Custom Reporting (see the sketch after this list)
- Expert in all Modules of Azure DevOps Server/Service (Work Item, SCM/VC, Build, Release, Test, Reporting Management)
- CICD Orchestration tools and other SCM/VC tools
- Microsoft MCSD Application Lifecycle Management certified
- A bachelor's or master's degree with a minimum of 6 years of relevant work experience in Azure DevOps Server/Services (SaaS)
- Good communication skills
- Strong knowledge of application lifecycle workflows and processes involved in the design, development, deployment, test, and maintenance of software systems in the Windows environment
- Experience with Visual Studio and the .NET Framework is required.
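As a rough sketch of the REST-API automation mentioned in the list above, the snippet below pulls recent builds from an Azure DevOps organization using a personal access token, the sort of call that custom reporting or extensions would build on. The organization, project, and api-version shown are placeholders and may differ in practice.

```python
# List recent Azure DevOps builds via the REST API.
# Organization, project, and api-version are placeholders; the PAT is
# read from an environment variable and sent as the Basic-auth password.
import os

import requests

ORGANIZATION = "example-org"      # placeholder
PROJECT = "example-project"       # placeholder
PAT = os.environ["AZDO_PAT"]      # personal access token (assumed env var)

url = (
    f"https://dev.azure.com/{ORGANIZATION}/{PROJECT}"
    "/_apis/build/builds?api-version=7.0&$top=10"
)
response = requests.get(url, auth=("", PAT), timeout=30)
response.raise_for_status()

for build in response.json().get("value", []):
    print(build["buildNumber"], build["status"], build.get("result"))
```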
- Work with the user community to adopt new features, enable new use cases, and help resolve any issues
- Create customizations and tools to help support the team’s needs (PM, Dev, Test, & Ops)
- Take the lead in the validation of the application.
- Monitor the health of the solution and take proactive steps to ensure reliable availability and performance
- Manage patches and updates for tooling solutions and related hosting environments including the operating system
- Automate the process for maintenance






