IT security assessment Jobs in Bangalore (Bengaluru)


Apply to 11+ IT security assessment Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest IT security assessment Job opportunities across top companies like Google, Amazon & Adobe.

MNC
Bengaluru (Bangalore)
4 - 6 yrs
₹2L - ₹10L / yr
Requirement Analysis
IT security
Information security
IT security assessment
  • Threat and vulnerability analysis.
  • Investigating, documenting, and reporting on any information security (InfoSec) issues as well as emerging trends.
  • Analysis and response to previously unknown hardware and software vulnerabilities.
  • Preparing disaster recovery plans.

SOC analysts are considered the last line of defense. They usually work as part of a large security team, alongside security managers and cybersecurity engineers. Typically, SOC analysts report to the company's chief information security officer (CISO).

SOC analysts need to be detail-oriented because they are responsible for monitoring many systems and alerts simultaneously. They need to watch the protected network and respond to threats and events. The level of responsibility typically depends on the size of the organization.

Signdesk
Posted by Anandhu Krishna
Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹15L / yr
ETL architecture
MongoDB
Business Intelligence (BI)
Amazon Web Services (AWS)
Snowflake schema

We are seeking a skilled AWS ETL/ELT Data Architect with a specialization in MongoDB to join our team. The ideal candidate will possess comprehensive knowledge and hands-on experience in designing, implementing, and managing ETL/ELT processes within AWS, while also demonstrating proficiency in MongoDB database management. This role requires expertise in data architecture, AWS services, and MongoDB to optimize data solutions effectively.


Responsibilities:


● Design, architect, and implement ETL/ELT processes within AWS, integrating data from various sources into data lakes or warehouses, and utilising MongoDB as part of the data ecosystem.

● Collaborate cross-functionally to assess data requirements, analyze sources, and strategize effective data integration within AWS environments, considering MongoDB's role in the architecture.

● Construct scalable and high-performance data pipelines within AWS while integrating MongoDB for optimal data storage, retrieval, and manipulation.

● Develop comprehensive documentation from scratch covering data architecture, data flows, and the interplay between AWS services, MongoDB, and ETL/ELT processes.

● Perform thorough data profiling, validation, and troubleshooting, ensuring data accuracy, consistency, and integrity in conjunction with MongoDB management.

● Stay updated with AWS and MongoDB best practices, emerging technologies, and industry trends to propose innovative data solutions and implementations.

● Provide mentorship to junior team members and foster collaboration with stakeholders to deliver robust data solutions.

● Analyze data issues, and identify and articulate the business impact of data problems.

● Perform code reviews and ensure that all solutions are aligned with pre-defined architectural standards, guidelines, and best practices, and meet quality standards.
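
As an illustration of the kind of MongoDB-to-AWS integration described in the responsibilities above, here is a minimal, hypothetical extract step: it pulls documents from a MongoDB collection with pymongo and lands them in S3 as JSON Lines via boto3. The connection URI, database, collection, and bucket names are placeholders, not details from this posting.

```python
# Hypothetical extract step: pull documents from a MongoDB collection and land
# them in S3 as JSON Lines, ready for downstream ELT in AWS (Glue/Athena/etc.).
# Connection string, bucket and collection names are illustrative placeholders.
import json

import boto3
from pymongo import MongoClient

MONGO_URI = "mongodb://localhost:27017"   # placeholder URI
S3_BUCKET = "example-data-lake-raw"       # placeholder bucket

def extract_orders_to_s3(batch_size: int = 1000) -> None:
    collection = MongoClient(MONGO_URI)["shop"]["orders"]   # hypothetical db/collection
    s3 = boto3.client("s3")

    batch, part = [], 0
    for doc in collection.find({}, {"_id": False}):  # drop ObjectId for clean JSON
        batch.append(json.dumps(doc, default=str))
        if len(batch) >= batch_size:
            s3.put_object(
                Bucket=S3_BUCKET,
                Key=f"raw/orders/part-{part:05d}.jsonl",
                Body="\n".join(batch).encode("utf-8"),
            )
            batch, part = [], part + 1
    if batch:  # flush the final partial batch
        s3.put_object(
            Bucket=S3_BUCKET,
            Key=f"raw/orders/part-{part:05d}.jsonl",
            Body="\n".join(batch).encode("utf-8"),
        )

if __name__ == "__main__":
    extract_orders_to_s3()
```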


Qualifications:


● Bachelor's or Master’s degree in Computer Science, Information Technology, or related field.

● Minimum 5 years of hands-on experience in ETL/ELT development, data architecture, or similar roles.

● Experience delivering at least 3-4 live projects in a similar field is desirable.

● Expertise in designing and implementing AWS-based ETL/ELT processes using tools like AWS Glue, AWS Data Pipeline, etc.

A LEADING US BASED MNC

Agency job
via Zeal Consultants
Bengaluru (Bangalore), Hyderabad, Delhi, Gurugram
5 - 10 yrs
₹14L - ₹15L / yr
Google Cloud Platform (GCP)
Spark
PySpark
Apache Spark
"DATA STREAMING"

Data Engineering: Senior Engineer / Manager


As a Senior Engineer / Manager in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and will independently drive design discussions to ensure the necessary health of the overall solution.


Must Have Skills:


1. GCP


2. Spark Streaming: live data streaming experience is desired.


3. Any one coding language: Java/Python/Scala
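
To make the Spark streaming requirement concrete, below is a minimal PySpark Structured Streaming sketch that consumes a Kafka topic and maintains a windowed count. The broker address and topic name are invented for the example, and running it requires the Spark-Kafka integration package on the cluster.

```python
# Minimal sketch of a Spark Structured Streaming job (PySpark) that consumes a
# Kafka topic and writes a running windowed count to the console. Broker and
# topic names are illustrative placeholders, not part of this job posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("live-event-stream-demo").getOrCreate()

# Read a stream of raw events from Kafka (assumed topic "events").
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")
    .load()
    .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
)

# Count events per 1-minute window as a simple streaming transformation.
counts = (
    events
    .withWatermark("timestamp", "5 minutes")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

query = (
    counts.writeStream
    .outputMode("update")
    .format("console")
    .option("truncate", "false")
    .start()
)
query.awaitTermination()
```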



Skills & Experience :


- Overall experience of at least 5 years, with a minimum of 4 years of relevant experience in Big Data technologies


- Hands-on experience with the Hadoop stack - HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and other components required in building end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.


- Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred


- Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.


- Well versed in, and with working knowledge of, data platform related services on GCP


- Bachelor's degree and 6 to 12 years of work experience, or any combination of education, training and/or experience that demonstrates the ability to perform the duties of the position


Your Impact:


- Data Ingestion, Integration and Transformation


- Data Storage and Computation Frameworks, Performance Optimizations


- Analytics & Visualizations


- Infrastructure & Cloud Computing


- Data Management Platforms


- Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time


- Build functionality for data analytics, search and aggregation

Leading technology and digital marketing company (IC1)

Agency job
via Multi Recruit by Chandra Kanth
Bengaluru (Bangalore)
3 - 7 yrs
₹12L - ₹14L / yr
Power BI
Data Analyst
Data Analytics
Business Intelligence (BI)
Tableau

We are looking for a Business Intelligence (BI)/Data Analyst to create and manage Power BI and analytics solutions that turn data into knowledge. In this role, you should have a background in data and business analysis. If you are self-directed, passionate about data, and have business acumen and problem-solving aptitude, we'd like to meet you. Ultimately, you will enhance our business intelligence system to help us make better decisions.

 

Requirements and Qualifications

  • BSc/BA in Computer Science, Engineering, or relevant field.
  • Financial experience and a marketing background are a plus
  • Strong Power BI development skills, including migration of existing deliverables to Power BI
  • Ability to work autonomously
  • Data modelling, calculations, conversions, and scheduling data refreshes in Power BI
  • Proven experience as a Power BI Developer is a must
  • Industry experience is preferred. Familiarity with other BI tools (Tableau, QlikView).
  • Analytical mind with a problem-solving aptitude.

 

Responsibilities

  • Design, develop and maintain business intelligence solutions
  • Craft and execute queries upon request for data
  • Present information through reports and visualization based on requirements gathered from stakeholders
  • Interact with the team to gain an understanding of the business environment, technical context, and organizational strategic direction
  • Design, build and deploy new, and extend existing dashboards and reports that synthesize distributed data sources
  • Ensure data accuracy, performance, usability, and functionality requirements of BI platform
  • Manage data through MS Excel, Google sheets, and SQL applications, as required and support other analytics platforms
  • Develop and execute database queries and conduct analyses
  • Develop and update technical documentation requirements
  • Communicate insights to both technical and non-technical audiences.
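
As a small, hypothetical illustration of the query-and-analysis responsibilities listed above, the sketch below runs a SQL aggregation with pandas and derives a metric a dashboard might surface. The database file, table, and columns are invented for the example; in practice the source would be whatever warehouse backs the BI platform.

```python
# Illustrative only: run a SQL query against a database and shape the result
# for a BI dashboard refresh. Table and column names are made up for the sketch.
import sqlite3

import pandas as pd

def monthly_revenue_summary(db_path: str = "example.db") -> pd.DataFrame:
    query = """
        SELECT strftime('%Y-%m', order_date) AS month,
               SUM(amount)                  AS revenue,
               COUNT(*)                     AS orders
        FROM   orders
        GROUP  BY month
        ORDER  BY month
    """
    with sqlite3.connect(db_path) as conn:
        df = pd.read_sql_query(query, conn)
    # Derived metric a report might surface alongside the raw figures.
    df["avg_order_value"] = df["revenue"] / df["orders"]
    return df

if __name__ == "__main__":
    print(monthly_revenue_summary().head())
```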

 

 

 

 

Fintech Company

Agency job
via Jobdost by Sathish Kumar
Bengaluru (Bangalore)
6 - 8 yrs
₹15L - ₹25L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark

Purpose of Job:
We are looking for an exceptionally talented Lead Data Engineer who has experience implementing AWS services to build data pipelines, API integrations, and data warehouse designs. A candidate with both hands-on and leadership capabilities will be ideal for this position.

 

Job Responsibilities:
• Total 6+ years of experience as a Data Engineer and 2+ years of experience in managing a team
• Minimum 3 years of AWS Cloud experience
• Well versed in languages such as Python, PySpark, SQL, NodeJS, etc.
• Extensive experience in the Spark ecosystem, with work on both real-time and batch processing
• Experience with AWS Glue, EMR, DMS, Lambda, S3, DynamoDB, Step Functions, Airflow, RDS, Aurora, etc.
• Experience with modern database systems such as Redshift, Presto, Hive, etc.
• Has built data lakes on S3 or Apache Hudi
• Solid understanding of data warehousing concepts
• Good to have experience with tools such as Kafka or Kinesis
• Good to have AWS Developer Associate or Solutions Architect Associate certification
• Have experience in managing a team
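
For context on the pipeline-building responsibilities above, here is a minimal PySpark batch job of the kind such a role typically owns: read raw JSON from S3, apply a simple transformation, and write partitioned Parquet back to S3. Bucket names, paths, and column names are placeholders; on AWS this would usually run on EMR or as a Glue job with the appropriate IAM and S3 permissions.

```python
# A minimal PySpark batch transform: read raw JSON from S3, clean it, and write
# partitioned Parquet back to S3. All paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-batch-transform-demo").getOrCreate()

raw = spark.read.json("s3a://example-raw-bucket/events/")  # placeholder path

cleaned = (
    raw
    .dropDuplicates(["event_id"])                              # hypothetical key
    .withColumn("event_date", F.to_date("event_timestamp"))
    .filter(F.col("amount") > 0)
)

(
    cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3a://example-curated-bucket/events/")           # placeholder path
)
```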


Qualifications:
At least a bachelor's degree in Science, Engineering, or Applied Mathematics; a master's degree is preferred.
Other requirements: team management skills, a learning attitude, and ownership.

Prismforce
Posted by Jyoti Moily
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹25L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Recommendation algorithms

Prismforce (www.prismforce.com) is a US-headquartered vertical SaaS product company with development teams in India. We are a Series-A funded venture, backed by a Tier 1 VC, targeted towards the tech/IT services industry and tech talent organizations in enterprises, solving their most critical sector-specific problems in the Talent Supply Chain. The product suite is powered by artificial intelligence designed to accelerate business impact, e.g. improved profitability and agility, by digitizing core vertical workflows underserved by custom applications and typical ERP offerings.


We are looking for Data Scientists to build data products that will be the core of a SaaS company disrupting the skills market. In this role you should be highly analytical with a keen understanding of data, machine learning, deep learning, analysis, algorithms, products, maths, and statistics. This hands-on individual will play the multiple roles of Data Scientist, Data Engineer, Data Analyst, efficient coder and, above all, problem solver.

Location: Mumbai / Bangalore / Pune / Kolkata

Responsibilities:

  • Identify relevant data sources - a combination of sources that makes the data useful.
  • Build automation of the collection processes.
  • Pre-process structured and unstructured data.
  • Handle large amounts of information to create the input to analytical models.
  • Build predictive models and machine learning algorithms; innovate on machine learning and deep learning algorithms.
  • Build network graphs, NLP, and forecasting models; build data pipelines for end-to-end solutions.
  • Propose solutions and strategies to business challenges; collaborate with product development teams and communicate with the senior leadership team.
  • Participate in problem-solving sessions.
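
As a hedged illustration of the "build predictive models" responsibility above, the sketch below trains a tiny scikit-learn text-classification pipeline on invented skill descriptions. The data and label names exist purely for the example and are not part of the role description.

```python
# Tiny scikit-learn pipeline that classifies short skill descriptions.
# The inline dataset and label names are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

texts = [
    "python pandas data analysis",
    "deep learning pytorch computer vision",
    "react typescript frontend development",
    "spark hadoop data engineering",
]
labels = ["data", "ml", "web", "data"]  # hypothetical skill categories

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(texts, labels)

print(model.predict(["machine learning with tensorflow"]))  # e.g. ['ml']
```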

Requirements:

  • Bachelor's degree in a highly quantitative field (e.g. Computer Science, Engineering, Physics, Math, Operations Research, etc.) or equivalent experience.
  • Extensive machine learning and algorithmic background, with a deep understanding of at least one of the following areas: supervised and unsupervised learning methods, reinforcement learning, deep learning, Bayesian inference, network graphs, or natural language processing.
  • Analytical mind and business acumen.
  • Strong math skills (e.g. statistics, algebra).
  • Problem-solving aptitude.
  • Excellent communication skills with the ability to communicate technical information.
  • Fluency with at least one data science/analytics programming language (e.g. Python, R, Julia).
  • Start-up experience is a plus; ideally 5-8 years of advanced analytics experience in startups/marquee companies.


Aureus Tech Systems
Posted by Naveen Yelleti
Kolkata, Hyderabad, Chennai, Bengaluru (Bangalore), Bhubaneswar, Visakhapatnam, Vijayawada, Trichur, Thiruvananthapuram, Mysore, Delhi, Noida, Gurugram, Nagpur
1 - 7 yrs
₹4L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark

Skills and requirements

  • Experience analyzing complex and varied data in a commercial or academic setting.
  • Desire to solve new and complex problems every day.
  • Excellent ability to communicate scientific results to both technical and non-technical team members.


Desirable

  • A degree in a numerically focused discipline such as Maths, Physics, Chemistry, Engineering, or Biological Sciences.
  • Hands-on experience with Python, PySpark, and SQL.
  • Hands-on experience building end-to-end data pipelines.
  • Hands-on experience with Azure Data Factory, Azure Databricks, and Data Lake is an added advantage.
  • Experience with Big Data tools: Hadoop, Hive, Sqoop, Spark, Spark SQL.
  • Experience with SQL or NoSQL databases for the purposes of data retrieval and management.
  • Experience in data warehousing and business intelligence tools, techniques and technology, as well as experience in diving deep on data analysis or technical issues to come up with effective solutions.
  • BS degree in math, statistics, computer science or equivalent technical field.
  • Experience in data mining structured and unstructured data (SQL, ETL, data warehouse, Machine Learning etc.) in a business environment with large-scale, complex data sets.
  • Proven ability to look at solutions in unconventional ways. Sees opportunities to innovate and can lead the way.
  • Willing to learn and work on Data Science, ML, AI.
Kloud9 Technologies
Bengaluru (Bangalore)
3 - 6 yrs
₹5L - ₹20L / yr
Amazon Web Services (AWS)
Amazon EMR
EMR
Spark
PySpark

About Kloud9:

 

Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.

 

Kloud9 was founded with the vision of bridging the gap between e-commerce and cloud. E-commerce in any industry is constrained by, and poses a huge challenge in terms of, the finances spent on physical data infrastructure.

 

At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.

 

Our sole focus is to provide cloud expertise to the retail industry, giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers and developers has been designing, building and implementing solutions for retailers for an average of more than 20 years.

 

We are a cloud vendor that is both platform and technology independent. Our vendor independence not only provides us with a unique perspective into the cloud market but also ensures that we deliver the cloud solutions that best meet our clients' requirements.


What we are looking for:

● 3+ years’ experience developing Data & Analytic solutions

● Experience building data lake solutions leveraging one or more of the following: AWS, EMR, S3, Hive & Spark

● Experience with relational SQL

● Experience with scripting languages such as Shell, Python

● Experience with source control tools such as GitHub and related dev process

● Experience with workflow scheduling tools such as Airflow

● In-depth knowledge of scalable cloud

● Has a passion for data solutions

● Strong understanding of data structures and algorithms

● Strong understanding of solution and technical design

● Has a strong problem-solving and analytical mindset

● Experience working with Agile Teams.

● Able to influence and communicate effectively, both verbally and written, with team members and business stakeholders

● Able to quickly pick up new programming languages, technologies, and frameworks

● Bachelor’s Degree in computer science
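
To illustrate the workflow-scheduling point above (Airflow), here is a minimal, hypothetical DAG with two stubbed Python tasks representing an extract and a load step. The DAG id, schedule, and task names are invented, and exact parameter names can vary slightly across Airflow versions.

```python
# Hedged Airflow sketch: a tiny DAG with two stubbed steps (extract, then load).
# The callables only print; in a real pipeline they would call S3/Spark/Hive.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_from_s3():
    print("pretend we copied s3://example-raw/... to a staging area")

def load_to_warehouse():
    print("pretend we loaded the staged data into a curated table")

with DAG(
    dag_id="example_daily_ingest",      # illustrative DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_from_s3", python_callable=extract_from_s3)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)

    extract >> load                      # run extract before load
```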


Why Explore a Career at Kloud9:

 

With job opportunities in prime locations in the US, London, Poland and Bengaluru, we help build your career path in the cutting-edge technologies of AI, Machine Learning and Data Science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with its creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.

Sixt R&D

Sixt R&D

Agency job
Bengaluru (Bangalore)
5 - 8 yrs
₹11L - ₹14L / yr
SQL
Python
RESTful APIs
Business Intelligence (BI)
QuickSight

Technical Requirements:

  • Bachelor's Degree in Computer Science or a related technical field, and solid years of relevant experience.
  • A strong grasp of SQL/Presto and at least one scripting or programming language (Python preferred).
  • Experience with enterprise-class BI tools and their auditing, along with automations using REST APIs.
  • Experience with reporting tools – QuickSight preferred, with at least 2 years hands-on.
  • Tableau/Looker – either would suffice, with at least 5+ years hands-on.
  • 5+ years of experience with and detailed knowledge of data warehouse technical architectures, data modelling, infrastructure components, ETL/ELT and reporting/analytic tools and environments, data structures, and hands-on SQL coding.
  • 5+ years of demonstrated quantitative and qualitative business intelligence experience.
  • Experience delivering significant business impact through product analysis.
  • 4+ years of large IT project delivery for BI-oriented projects using an agile framework.
  • 2+ years of working with very large data warehousing environments.
  • Experience in designing and delivering cross-functional custom reporting solutions.
  • Excellent oral and written communication skills, including the ability to communicate effectively with both technical and non-technical stakeholders.
  • Proven ability to meet tight deadlines, multi-task, and prioritize workload.
  • A work ethic based on a strong desire to exceed expectations.
  • Strong analytical skills and the ability to challenge processes.
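
As a sketch of the "BI tool auditing along with automations using REST APIs" requirement, the example below uses boto3 (the AWS SDK wrapper over QuickSight's REST API) to page through dashboards for an audit report. The account ID is a placeholder, and the response field names should be verified against the current boto3/QuickSight documentation.

```python
# Hedged QuickSight audit sketch: list dashboards via boto3 and collect a few
# summary fields for an audit report. The account ID is a placeholder.
import boto3

ACCOUNT_ID = "123456789012"  # placeholder AWS account ID

def audit_dashboards() -> list[dict]:
    qs = boto3.client("quicksight")
    dashboards, token = [], None
    while True:
        kwargs = {"AwsAccountId": ACCOUNT_ID}
        if token:
            kwargs["NextToken"] = token
        resp = qs.list_dashboards(**kwargs)
        for d in resp.get("DashboardSummaryList", []):
            dashboards.append(
                {
                    "id": d.get("DashboardId"),
                    "name": d.get("Name"),
                    "last_updated": str(d.get("LastUpdatedTime")),
                }
            )
        token = resp.get("NextToken")
        if not token:
            break
    return dashboards

if __name__ == "__main__":
    for row in audit_dashboards():
        print(row)
```
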
Synapsica Technologies Pvt Ltd
Posted by Human Resources
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹45L / yr
SVM
OpenCV
Machine Learning (ML)
Deep Learning
Artificial Intelligence (AI)

Introduction

Synapsica (www.synapsica.com) is a series-A funded HealthTech startup founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while being affordable. Every patient has the right to know exactly what is happening in their body, and should not have to rely on cryptic two-liners given to them as a diagnosis.

Towards this aim, we are building an artificial intelligence enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, Y Combinator and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here's a small sample of what we're building: https://www.youtube.com/watch?v=FR6a94Tqqls

 

Your Roles and Responsibilities

Synapsica is looking for a Principal AI Researcher to lead and drive AI-based research and development efforts. The ideal candidate should have extensive experience in Computer Vision and AI research, either through studies or industrial R&D projects, and should be excited to work on advanced exploratory research and development projects in computer vision and machine learning to create the next generation of advanced radiology solutions.

The role involves computer vision tasks including the development, customization, and training of Convolutional Neural Networks (CNNs); application of ML techniques (SVM, regression, clustering, etc.); and traditional image processing (OpenCV, etc.). The role is research-focused and involves reading and implementing existing research papers, deep dives into problem analysis, frequent review of results, generating new ideas, building new models from scratch, publishing papers, and automating and optimizing key processes. The role will span from real-world data handling to the most advanced methods such as transfer learning, generative models, reinforcement learning, etc., with a focus on understanding quickly and experimenting even faster. The successful candidate will collaborate closely with the medical research team, software developers, and AI research scientists. The candidate must be creative, ask questions, and be comfortable challenging the status quo. The position is based in our Bangalore office.

 

 

Primary Responsibilities

  • Interface between product managers and engineers to design, build, and deliver AI models and capabilities for our spine products.
  • Formulate and design AI capabilities of our stack with special focus on computer vision.
  • Strategize end-to-end model training flow including data annotation, model experiments, model optimizations, model deployment and relevant automations
  • Lead teams, engineers, and scientists to envision and build new research capabilities and ensure delivery of our product roadmap.
  • Organize regular reviews and discussions.
  • Keep the team up-to-date with latest industrial and research updates.
  • Publish research and clinical validation papers

 

Requirements

  • 6+ years of relevant experience in solving complex real-world problems at scale using computer vision-based deep learning.
  • Prior experience in leading and managing a team.
  • Strong problem-solving ability.
  • Prior experience with Python, cuDNN, TensorFlow, PyTorch, Keras, Caffe (or similar deep learning frameworks).
  • Extensive understanding of computer vision/image processing applications like object classification, segmentation, object detection, etc.
  • Ability to write custom Convolutional Neural Network architectures in PyTorch (or similar).
  • Background in publishing research papers and/or patents.
  • Computer vision and AI research background in the medical domain will be a plus.
  • Experience with GPU/DSP/other multi-core architecture programming.
  • Effective communication with other project members and project stakeholders.
  • Detail-oriented, eager to learn, and keen to acquire new skills.
  • Prior project management and team leadership experience.
  • Ability to plan work and meet deadlines.
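
To ground the "custom Convolutional Neural Network architecture in PyTorch" requirement, here is a tiny, illustrative CNN for single-channel images (for example, grayscale radiographs). Layer sizes and the number of classes are arbitrary choices for the sketch, not Synapsica's architecture.

```python
# Hedged sketch of a small custom CNN in PyTorch for single-channel images.
# Layer sizes and number of classes are arbitrary choices for illustration.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                 # halves spatial resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),         # global average pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

if __name__ == "__main__":
    model = TinyCNN()
    dummy = torch.randn(4, 1, 64, 64)   # batch of 4 fake 64x64 grayscale images
    print(model(dummy).shape)           # expected: torch.Size([4, 3])
```
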
A Fintech startup

Agency job
via Success Pact by Priya Sariyal
Remote, Bengaluru (Bangalore)
3 - 15 yrs
₹16L - ₹22L / yr
Data Science
XGBoost
Retail banking
Random boosting
Gradient boosting
  • 3-5 yrs of practical DS experience working with varied data sets. Experience with retail banking is preferred but not necessary.
  • Strong in the concepts of statistical modelling – we are particularly looking for practical knowledge learnt from work experience (should be able to give "rule of thumb" answers).
  • Strong problem-solving skills and the ability to articulate ideas really well.
  • Ideally, the data scientist should have interfaced with data engineering and model deployment teams to bring models/solutions live in production.
  • Strong working knowledge of the Python ML stack is very important here.
  • Willing to work on a diverse range of tasks in building ML-related capability on the Corridor Platform as well as client work.
  • Someone with a strong interest in the data engineering aspects of ML is highly preferred, i.e. someone who can play the dual role of Data Scientist and of an engineer who writes robust code for modules on our Corridor Platform.

Structured ML techniques for candidates:

 

  1. GBM
  2. XGBoost
  3. Random Forest
  4. Neural Net
  5. Logistic Regression
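
For a quick, hedged illustration of how these structured-data models are typically compared, the sketch below fits gradient boosting, random forest, and logistic regression from scikit-learn on a synthetic dataset and reports AUC. XGBoost is omitted only to keep the example dependency-free; its scikit-learn-style API would slot in the same way.

```python
# Compare three of the structured-data models listed above on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "gbm": GradientBoostingClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```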