Network architecture Jobs in Bangalore (Bengaluru)

11+ Network architecture Jobs in Bangalore (Bengaluru) | Network architecture Job openings in Bangalore (Bengaluru)

Apply to 11+ Network architecture Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Network architecture Job opportunities across top companies like Google, Amazon & Adobe.

Vola Finance
Bengaluru (Bangalore)
3yrs+
Up to ₹20L / yr (varies)
Amazon Web Services (AWS)
Data engineering
Spark
SQL
Data Warehouse (DWH)
+4 more

Lightning Job By Cutshort⚡

 

As part of this feature, you can expect status updates about your application and replies within 72 hours (once the screening questions are answered).


Roles & Responsibilities


Basic Qualifications:

● The position requires a four-year degree from an accredited college or university.

● Three years of data engineering, AWS architecture, and security experience.


Top candidates will also have:

A proven, strong understanding of and/or experience in many of the following:

● Experience designing scalable AWS architectures.

● Ability to create modern data pipelines and data processing using AWS PaaS components (Glue, etc.) or open-source tools (Spark, HBase, Hive, etc.).

● Ability to develop SQL structures that support high volumes and scalability, using an RDBMS such as SQL Server, MySQL, Aurora, etc.

● Ability to model and design modern data structures, SQL/NoSQL databases, data lakes, and cloud data warehouses.

● Experience creating network architectures for secure, scalable solutions.

● Experience with message brokers such as Kinesis, Kafka, RabbitMQ, AWS SQS, AWS SNS, and Apache ActiveMQ; hands-on experience with AWS serverless services such as Glue, Lambda, Redshift, etc.

● Working knowledge of load balancers, AWS Shield, AWS GuardDuty, VPCs, subnets, network gateways, Route 53, etc.

● Knowledge of building disaster-recovery systems and security-log notification systems.

● Knowledge of building scalable microservice architectures with AWS.

● Ability to create a framework for monthly security checks, with broad knowledge of AWS services.

● Experience deploying software using CI/CD tools such as CircleCI, Jenkins, etc.

● ML/AI model deployment and production maintenance experience is mandatory.

● Experience with API tools such as REST, Swagger, Postman, and Assertible.

● Version-management tools such as GitHub, Bitbucket, and GitLab.

● Experience debugging and maintaining software on Linux or Unix platforms.

● Test-driven development.

● Experience building transactional databases.

● Python and PySpark programming experience.

● Experience engineering solutions in AWS is a must.

● Working AWS experience; AWS certification is required prior to hiring.

● Experience working in Agile/Kanban frameworks.

● Must demonstrate solid knowledge of computer science fundamentals like data structures and algorithms.

● Passion for technology and an eagerness to contribute to a team-oriented environment.

● Demonstrated leadership on medium- to large-scale projects impacting strategic priorities.

● A bachelor's degree in computer science, electrical engineering, or a related field is required.
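As a minimal sketch of the pipeline and SQL skills listed above (hypothetical schema and numbers; an in-memory SQLite database stands in for an RDBMS such as MySQL or Aurora, where a production pipeline would use Glue or Spark against a warehouse):

```python
import sqlite3

# In-memory SQLite stands in for an RDBMS such as MySQL or Aurora.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE transactions (
        user_id INTEGER NOT NULL,
        amount  REAL    NOT NULL,
        status  TEXT    NOT NULL
    )
""")
# An index on user_id supports high-volume lookups, as the bullet on
# scalable SQL structures suggests.
conn.execute("CREATE INDEX idx_txn_user ON transactions(user_id)")

rows = [
    (1, 120.0, "settled"),
    (1, 80.0,  "settled"),
    (2, 50.0,  "pending"),
    (2, 200.0, "settled"),
]
conn.executemany("INSERT INTO transactions VALUES (?, ?, ?)", rows)

# Aggregate settled spend per user: the shape of a typical pipeline step.
totals = dict(conn.execute("""
    SELECT user_id, SUM(amount)
    FROM transactions
    WHERE status = 'settled'
    GROUP BY user_id
""").fetchall())
print(totals)  # {1: 200.0, 2: 200.0}
```

The same filter-and-aggregate shape carries over directly to Spark SQL or a Glue job; only the storage layer changes.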

Gurugram, Bengaluru (Bangalore), Chennai
2 - 9 yrs
₹9L - ₹27L / yr
DevOps
Microsoft Windows Azure
GitLab
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+15 more
Greetings!

We are looking for a technically driven "MLOps Engineer" for one of our premium clients.

COMPANY DESCRIPTION:
This Company is a global management consulting firm. We are the trusted advisor to the world's leading businesses, governments, and institutions. We work with leading organizations across the private, public and social sectors. Our scale, scope, and knowledge allow us to address


Key Skills
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration
(Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes, building and maintaining secure DevOps pipelines with at
least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and
container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure
DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow,
Kubeflow)
• Hands-on Python 3 coding skills (e.g., APIs), including automated testing frameworks and
libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts
(e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking,
model governance, packaging, deployment, feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud
infrastructure
• Knowledge of SQL (intermediate level or better preferred) and familiarity working with at least
one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
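The "experiment tracking" item above reduces to a simple shape: persist each run's parameters and metrics, then query for the best one. A stdlib-only sketch with hypothetical names; real tooling such as MLflow adds storage backends, a UI, and governance on top of this idea:

```python
import json
import tempfile
from pathlib import Path

def log_run(run_dir: Path, name: str, params: dict, metrics: dict) -> Path:
    """Persist one training run's parameters and metrics as a JSON record."""
    run_dir.mkdir(parents=True, exist_ok=True)
    out = run_dir / f"run-{name}.json"
    out.write_text(json.dumps({"params": params, "metrics": metrics}, indent=2))
    return out

def best_run(run_dir: Path, metric: str) -> dict:
    """Return the logged run with the highest value of `metric`."""
    records = [json.loads(p.read_text()) for p in run_dir.glob("run-*.json")]
    return max(records, key=lambda r: r["metrics"][metric])

runs = Path(tempfile.mkdtemp())
log_run(runs, "baseline", {"lr": 0.1},  {"accuracy": 0.81})
log_run(runs, "tuned",    {"lr": 0.01}, {"accuracy": 0.87})
print(best_run(runs, "accuracy")["params"])  # {'lr': 0.01}
```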
Thoughtworks
Posted by Sunidhi Thakur
Bengaluru (Bangalore)
10 - 13 yrs
Best in industry
Data modeling
PySpark
Data engineering
Big Data
Hadoop
+10 more

Lead Data Engineer

 

Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.

 

Job responsibilities

 

·      You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems

·      You will partner with teammates to create complex data processing pipelines in order to solve our clients' most ambitious challenges

·      You will collaborate with Data Scientists in order to design scalable implementations of their models

·      You will pair to write clean and iterative code based on TDD

·      Leverage various continuous delivery practices to deploy, support and operate data pipelines

·      Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available

·      Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions

·      Create data models and speak to the tradeoffs of different modeling approaches

·      On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product

·      Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process

·      Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes

 

Job qualifications

Technical skills

·      You are equally happy coding and leading a team to implement a solution

·      You have a track record of innovation and expertise in Data Engineering

·      You're passionate about craftsmanship and have applied your expertise across a range of industries and organizations

·      You have a deep understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop

·      You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, or NoSQL databases (HBase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting

·      Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)

·      You are comfortable taking data-driven approaches and applying data security strategy to solve business problems

·      You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments

·      Working with data excites you: you have created big data architectures, you can build and operate data pipelines, and you can maintain data storage, all within distributed systems

 

Professional skills


·      Advocate your data engineering expertise to the broader tech community outside of Thoughtworks, speaking at conferences and acting as a mentor for more junior-level data engineers

·      You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives

·      An interest in coaching others, sharing your experience and knowledge with teammates

·      You enjoy influencing others and always advocate for technical excellence while being open to change when needed

Promilo
Posted by Karina Biswal
Bengaluru (Bangalore)
2 - 8 yrs
₹1.5L - ₹3L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+7 more

Designation: Business Analyst

Company name: Promilo.com (Sawara Solutions Pvt Ltd)

Experience: 2 - 8 yrs

Location: Bangalore

Mode: Full time / work from office


About us:

Promilo is India’s first innovative “pay to browse” platform.

It is a B2B SaaS start-up that helps accelerate the business appointment funnels of companies. We’re a SaaS-based advertising platform that connects users and advertisers. Users can book an online appointment with an advertiser based on their interests, without compromising their data privacy, and get rewarded for sharing their data and time. We’re registered and recognized by Startup India, Startup Karnataka, and MSME, and are among the top 100 start-ups in the Google AppScale Academy.


Job description:


We are looking for an experienced business analyst to join our team. The ideal candidate will have 2-8 years of experience in web and mobile user and client data analysis for start-ups, a strong passion for helping start-ups, and a proven track record of delivering business insights that improve an organisation's sales, marketing, user and client experience, UI, and UX.


Responsibilities

  • Requirement gathering and analysis
  • Conduct gap analysis, assess scope, and suggest solutions
  • Responsible for technical proposal writing and time-and-cost analysis for web and mobile application development
  • Prepare RFPs/RFQs
  • Be involved in presales activities
  • Work as a liaison between the client and the technical team
  • Create wireframes | prototypes | feature lists | SRS | BRD & flow diagrams as per the client's requirements
  • High IT literacy: proven use of the web and associated technologies (Excel, PowerPoint, Google Apps)
  • Previous experience with data visualization tools (Tableau, Power BI, etc.) is strongly preferred
  • Cleanse and curate sourced data into standardized reporting templates
  • Create, document, validate, and ensure delivery of ad hoc, daily, weekly, and monthly reports to internal stakeholders
  • Create, validate, and deliver tracking links for the marketing department
  • Assist in the creation, QA, validation, and reporting of A/B and multivariate tests
  • Proactively monitor marketing KPIs and UA data on a daily basis
  • Analyze marketing UA performance and conduct deep-dive analysis to answer hypotheses and questions posed by the team
  • Gather, transform, and analyze digital marketing data, including paid media, search, social, website, and conversion funnel analytics
  • Analyze marketing data, searching for top-of-funnel growth opportunities
  • Analyze product data, searching for insights to increase app engagement, conversion, and retention
  • Analyze LTV/CAC drivers to support overall business growth
  • Partner with marketing, product, and growth teams
  • Present findings to stakeholders and make recommendations for spend targets and campaign strategies
  • A POV on iOS 14 and upcoming Android privacy changes, and how we can navigate tracking in light of these changes
  • A POV on the transition to SKAN 4.0
  • Working knowledge of statistical techniques (regression, k-means clustering, PCA)
  • Experience with lift studies and marketing mix modeling; working experience with Python, R, & dbt
  • Experience at a small company
  • Experience with a subscription business
  • Analyze website and mobile app data on traffic sources and patterns. Provide insight on data trends and anomalies, making recommendations where appropriate to improve business performance.


Qualification

  • Master's or bachelor's degree in computer science
  • Well-versed with IT technologies
  • 2+ years of business analysis or project analysis experience
  • Tech-savvy, with proficiency in Microsoft Office, Google Apps, and other web and mobile applications
  • Excellent written and verbal communication skills
  • Self-motivated, flexible, and comfortable with a fast-paced startup environment
  • Advanced experience with Excel and Google Sheets, including an understanding of visualizations, pivot tables, VLOOKUP, and other key functions
  • Experience with Adobe Analytics, Google Analytics, Tableau, SQL, and Data Grid is a plus
  • Strong analytical and problem-solving skills, with clear attention to detail
  • Ability to prioritize and work under tight deadlines
  • Fast learner, able to master new concepts, theories, ideas, and processes with ease
  • Experience creating user acquisition reports and dashboards
  • Deep understanding of mobile attribution, cohort analysis, customer segmentation, and LTV modeling
  • Experience pulling data and creating databases using the API of at least one of these ad platforms: Facebook, Snapchat, TikTok, Google Ads, AppLovin
  • Experience with the architecture and deployment of mobile tracking solutions, including SDK integration, ad platform APIs, server postbacks, and MMP providers such as AppsFlyer, Adjust, and Kochava


If you are a data-driven individual with a passion for start-ups and experience in business analytics, we encourage you to apply for this position. We offer a competitive salary package, flexible working hours, and a supportive work environment that fosters growth and development.
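As a small illustration of the statistical techniques this role names (regression in particular), here is an ordinary least-squares fit in plain Python. The spend/install numbers are invented for the example; a real analysis would use scikit-learn or statsmodels:

```python
# Ordinary least squares for a single predictor, e.g. ad spend vs. installs.
def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error for y = slope*x + b."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

spend    = [10, 20, 30, 40]      # daily ad spend (hypothetical units)
installs = [55, 105, 155, 205]   # exactly linear: installs = 5 * spend + 5

slope, intercept = fit_line(spend, installs)
print(slope, intercept)  # 5.0 5.0
```

With real (noisy) data the fitted slope is the estimated installs gained per unit of spend, which feeds directly into the LTV/CAC analysis mentioned above.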


Chennai, Coimbatore, Bengaluru (Bangalore), Pune, Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Kochi (Cochin), Kolkata
6 - 9 yrs
₹6L - ₹10L / yr
Data Warehouse (DWH)
Informatica
ETL
Amazon Web Services (AWS)
SQL
+1 more
Designation: Informatica Cloud (IICS)
Experience: 6-9 Years
Location: Pan India
Job Description

 
Must have: Work experience in Informatica Intelligent Cloud Services, SQL, and data analysis.
 
Roles and responsibilities
 
Tracking/management of all software assets, including internal license assessments
Participate in software vendor contract/license negotiations and the development of software licenses and associated maintenance contracts.
Knowledge of SW license procurement and hands-on experience with the Ariba tool.
Prepare and assist in the performance of periodic compliance reports
Provide support to end users regarding specific vendor product use rights
Assist in the establishment of internal controls related to software asset management, governance, and compliance
Exposure to Oracle, MS, IBM, Adobe, and other enterprise licensing.
Experience handling cloud (AWS, Amazon, GCP) & billing
Participated in license compliance audits.
Tracking/management of software license governance and compliance in accordance with enterprise policy, processes, procedures, and controls by internal staff and external service providers
Required:
             Working knowledge of Remedy, ServiceNow, or any IT asset tool platform
             IT Asset Management and Discovery Tools experience
             Experience interpreting licensing terms and conditions
             Conceptual knowledge of Information Technology
Fragma Data Systems
Posted by Evelyn Charles
Remote, Bengaluru (Bangalore)
1 - 5 yrs
₹5L - ₹15L / yr
Spark
PySpark
Big Data
Python
SQL
+1 more
Must-Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformations and aggregations
• Good at ELT architecture: business-rule processing and data extraction from a data lake into data streams for business consumption
• Good customer communication skills
• Good analytical skills
 
 
Technology Skills (Good to Have):
  • Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hubs/IoT Hub
  • Experience migrating on-premise data warehouses to data platforms on the Azure cloud
  • Designing and implementing data engineering, ingestion, and transformation functions
  • Azure Synapse or Azure SQL Data Warehouse
  • Spark on Azure (available in HDInsight and Databricks)
Servian
Posted by Sakshi Nigam
Bengaluru (Bangalore)
2 - 8 yrs
₹10L - ₹25L / yr
Data engineering
ETL
Data Warehouse (DWH)
Powershell
DA
+7 more
Who we are
 
We are a consultant-led organisation. We invest heavily in our consultants to ensure they have the technical skills and commercial acumen to be successful in their work.
 
Our consultants have a passion for data and solving complex problems. They are curious, ambitious, and experts in their fields. We have developed a first-rate team, so you will be supported and learn from the best.

About the role

  • Collaborating with a team of like-minded and experienced engineers for Tier 1 customers, you will focus on data engineering on large complex data projects. Your work will have an impact on platforms that handle crores of customers and millions of transactions daily.

  • As an engineer, you will use the latest cloud services to design and develop reusable core components and frameworks to modernise data integrations in a cloud first world and own those integrations end to end working closely with business units. You will design and build for efficiency, reliability, security and scalability. As a consultant, you will help drive a data engineering culture and advocate best practices.

Mandatory experience

    • 1-6 years of relevant experience
    • Strong SQL skills and data literacy
    • Hands-on experience designing and developing data integrations, either in ETL tools, cloud native tools or in custom software
    • Proficiency in scripting and automation (e.g. PowerShell, Bash, Python)
    • Experience in an enterprise data environment
    • Strong communication skills

Desirable experience

    • Ability to work on data architecture, data models, data migration, integration and pipelines
    • Ability to work on data platform modernisation from on-premise to cloud-native
    • Proficiency in data security best practices
    • Stakeholder management experience
    • Positive attitude with the flexibility and ability to adapt to an ever-changing technology landscape
    • Desire to gain breadth and depth of technologies to support customer's vision and project objectives

What to expect if you join Servian?

    • Learning & Development: We invest heavily in our consultants and offer internal training weekly (both technical and non-technical alike!) and abide by a 'You Pass, We Pay' policy.
    • Career progression: We take a longer term view of every hire. We have a flat org structure and promote from within. Every hire is developed as a future leader and client adviser.
    • Variety of projects: As a consultant, you will have the opportunity to work across multiple projects across our client base significantly increasing your skills and exposure in the industry.
    • Great culture: Working on the latest Apple MacBook Pro in our custom-designed offices in the heart of leafy Jayanagar, we provide a peaceful and productive work environment close to shops, parks, and a metro station.
    • Professional development: We invest heavily in professional development both technically, through training and guided certification pathways, and in consulting, through workshops in client engagement and communication. Growth in our organisation happens from the growth of our people.
Prescience Decision Solutions
Posted by Shivakumar K
Bengaluru (Bangalore)
3 - 7 yrs
₹10L - ₹20L / yr
Big Data
ETL
Spark
Apache Kafka
Apache Spark
+4 more

The Data Engineer would be responsible for selecting and integrating the required Big Data tools and frameworks, and would implement data ingestion and ETL/ELT processes.

Required Experience, Skills and Qualifications:

  • Hands-on experience with Big Data tools/technologies like Spark, Databricks, MapReduce, Hive, and HDFS
  • Expertise and excellent understanding of the big data toolset, such as Sqoop, Spark Streaming, Kafka, and NiFi
  • Proficiency in any of the programming languages Python, Scala, or Java, with 4+ years' experience
  • Experience with cloud infrastructures like MS Azure, Data Lake, etc.
  • Good working knowledge of NoSQL DBs (Mongo, HBase, Cassandra)
Yottaasys AI LLC
Posted by Dinesh Krishnan
Bengaluru (Bangalore), Singapore
2 - 5 yrs
₹9L - ₹20L / yr
Data Science
Deep Learning
R Programming
Python
Machine Learning (ML)
+2 more
We are a US-headquartered product company looking to hire a few passionate Deep Learning and Computer Vision team players with 2-5 years of experience! If you are any of these:
1. An expert in deep learning and machine learning techniques,
2. Extremely good at image/video processing,
3. In possession of a good understanding of linear algebra, optimization techniques, statistics, and pattern recognition,
then you are the right fit for this position.
Nanonets
Posted by Neil Shroff
Remote, Mumbai, Bengaluru (Bangalore)
3 - 10 yrs
$25K - $50K / yr
Deep Learning
TensorFlow
Machine Learning (ML)
Python

We are looking for an engineer with an ML/DL background.


The ideal candidate should have the following skill set:

1) Python
2) TensorFlow
3) Experience building and deploying systems
4) Experience with Theano/Torch/Caffe/Keras is all useful
5) Experience with data warehousing/storage/management would be a plus
6) Experience writing production software would be a plus
7) The ideal candidate should have developed their own DL architectures apart from using open-source architectures.
8) The ideal candidate would have extensive experience with computer vision applications


Candidates would be responsible for building Deep Learning models to solve specific problems. The workflow would look as follows:

1) Define Problem Statement (input -> output)
2) Preprocess Data
3) Build DL model
4) Test on different datasets using Transfer Learning
5) Parameter Tuning
6) Deployment to production
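Step 5 above (parameter tuning) often begins as a plain grid search. A stdlib-only sketch follows; the scoring function is a toy stand-in for "train the model and return validation accuracy", and all names and numbers are hypothetical:

```python
import itertools

def validation_accuracy(lr, batch_size):
    """Toy stand-in for training a model and scoring it on a validation set.
    For the example, the sweet spot is placed at lr=0.01, batch_size=32."""
    return 1.0 - abs(lr - 0.01) * 10 - abs(batch_size - 32) / 1000

# The search space: every combination of these values is evaluated.
grid = {
    "lr": [0.1, 0.01, 0.001],
    "batch_size": [16, 32, 64],
}

best = max(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda params: validation_accuracy(**params),
)
print(best)  # {'lr': 0.01, 'batch_size': 32}
```

In practice the grid is replaced by random or Bayesian search once training runs get expensive, but the loop structure stays the same.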


The candidate should have experience working on Deep Learning, with an engineering degree from a top-tier institute (preferably IIT/BITS or equivalent).

GreedyGame
Posted by Debdutta Pal
Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹12L / yr
Python
MySQL
Data Science
NOSQL Databases
GreedyGame is looking for a data scientist who will help us make sense of the vast amount of available data in order to make smarter decisions and develop high-quality products. Your primary focus will be using data mining techniques, statistical analysis, and machine learning to build high-quality prediction systems and strong consumer engagement profiles.

Responsibilities

- Build the required statistical models and heuristics to predict, optimize, and guide various aspects of our business based on available data
- Interact with product and operations teams to identify gaps, questions, and issues for data analysis and experiments
- Develop and code software programs and algorithms, and create automated processes which cleanse, integrate, and evaluate large datasets from multiple sources
- Create systems that use data from user behavior to identify actionable insights, and convey these insights to product and operations teams from time to time
- Help in redefining the ad-viewing experience for consumers on a global scale

Skills Required

- Coding experience in Python, MySQL, and NoSQL, and building prototypes for algorithms
- Comfortable and willing to learn any machine learning algorithm, reading research papers and delving deep into the maths
- Passionate and curious to learn the latest trends, methods, and technologies in this field

What's in it for you?

- Opportunity to be a part of the big disruption we are creating in the ad-tech space
- Work with complete autonomy, and take on multiple responsibilities
- Work in a fast-paced environment, with uncapped opportunities to learn and grow
- Office in one of the most happening places in India
- Amazing colleagues, weekly lunches, and beer on Fridays!

What we are building:

GreedyGame is a platform which enables blending of ads within the mobile gaming experience using assets like backgrounds, characters, and power-ups. It helps advertisers engage audiences while they are playing games, empowers game developers to monetize their development efforts through non-intrusive advertising, and allows gamers to enjoy gaming content without having to deal with distracting advertising.