Designation: Specialist - Cloud Service Developer (ABL_SS_600)
Position description:
- The person would be primarily responsible for developing solutions using AWS services, e.g., Fargate, Lambda, ECS, ALB, NLB, S3, etc.
- Apply advanced troubleshooting techniques to provide solutions to issues pertaining to service availability, performance, and resiliency
- Monitor and optimize performance using AWS dashboards and logs
- Partner with engineering leaders and peers to deliver technology solutions that meet business requirements
- Work with the cloud team in an agile approach and develop cost-optimized solutions
Primary Responsibilities:
- Develop solutions using AWS services including Fargate, Lambda, ECS, ALB, NLB, S3, etc.
Reporting Team
- Reporting Designation: Head - Big Data Engineering and Cloud Development (ABL_SS_414)
- Reporting Department: Application Development (2487)
Required Skills:
- AWS certification would be preferred
- Good understanding of monitoring (CloudWatch alarms, logs, custom metrics, SNS configuration)
- Good experience with Fargate, Lambda, ECS, ALB, NLB, S3, Glue, Aurora, and other AWS services
- Knowledge of storage (S3, lifecycle management, event configuration) is preferred
- Good grasp of data structures and programming (PySpark / Python / Golang / Scala)
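The monitoring skills above (CloudWatch custom metrics in particular) can be sketched in a few lines of Python. This is only an illustration, not part of the role description: the metric name, namespace, and dimension are hypothetical, and the boto3 call is shown in a comment so the snippet runs without AWS credentials.

```python
from datetime import datetime, timezone

def build_metric_datum(name, value, unit="Count", service="checkout"):
    """Build one entry for CloudWatch's PutMetricData MetricData list."""
    return {
        "MetricName": name,
        "Dimensions": [{"Name": "Service", "Value": service}],
        "Timestamp": datetime.now(timezone.utc),
        "Value": float(value),
        "Unit": unit,
    }

datum = build_metric_datum("FailedLogins", 3)

# With credentials configured, the datum would be published like this:
# import boto3
# boto3.client("cloudwatch").put_metric_data(
#     Namespace="MyApp",  # hypothetical namespace
#     MetricData=[datum],
# )
```

A CloudWatch alarm could then be attached to the custom metric to notify an SNS topic when it breaches a threshold.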
Job Description: Data Engineer
We are looking for a curious Data Engineer to join our extremely fast-growing tech team at StanPlus.
About RED.Health (Formerly Stanplus Technologies)
Get to know the team:
Join our team and help us build the world’s fastest and most reliable emergency response system using cutting-edge technology.
Because every second counts in an emergency, we are building systems and flows with four nines (99.99%) of reliability to ensure that our technology is always there when people need it the most. We are looking for distributed systems experts who can help us perfect the architecture behind our key design principles: scalability, reliability, programmability, and resiliency. Our system features a powerful dispatch engine that connects emergency service providers with patients in real time.
Key Responsibilities
● Build Data ETL Pipelines
● Develop data set processes
● Apply strong analytical skills to work with unstructured datasets
● Evaluate business needs and objectives
● Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery
● Interpret trends and patterns
● Work with data and analytics experts to strive for greater functionality in our data system
● Build algorithms and prototypes
● Explore ways to enhance data quality and reliability
● Work with the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
● Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Key Requirements
● Proven experience of at least 3 years as a data engineer, software developer, or in a similar role.
● Bachelor's / Master’s degree in data engineering, big data analytics, computer engineering, or related field.
● Experience with big data tools: Hadoop, Spark, Kafka, etc.
● Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
● Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
● Experience with Azure, AWS cloud services: EC2, EMR, RDS, Redshift
● Experience with BigQuery
● Experience with stream-processing systems: Storm, Spark-Streaming, etc.
● Experience with languages: Python, Java, C++, Scala, SQL, R, etc.
● Good hands-on experience with Hive and Presto.
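As a rough illustration of the "Build Data ETL Pipelines" responsibility above, here is a minimal extract-transform-load sketch in plain Python. The field names and cleaning rule are invented for the example; a production pipeline would use the tools listed above (Spark, Kafka, Airflow) rather than in-memory lists.

```python
def extract(rows):
    """Extract: in practice this would read from Kafka, S3, or a database."""
    return list(rows)

def transform(rows):
    """Transform: normalise city names and drop records missing a dispatch time."""
    return [
        {**r, "city": r["city"].strip().title()}
        for r in rows
        if r.get("dispatch_seconds") is not None
    ]

def load(rows, sink):
    """Load: append to a sink (a list here; a warehouse table in production)."""
    sink.extend(rows)
    return len(rows)

raw = [
    {"city": " hyderabad ", "dispatch_seconds": 42},
    {"city": "Mumbai", "dispatch_seconds": None},  # dropped by transform
]
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
```

The same three-stage shape scales up directly: each stage becomes a task in an orchestrator such as Airflow, with the hand-off happening through durable storage instead of Python objects.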
About The Company
The client is a 17-year-old multinational company headquartered in Whitefield, Bangalore, with another delivery center in Hinjewadi, Pune. It also has offices in the US and Germany, works with several OEMs and product companies in about 12 countries, and has a 200+ strong team worldwide.
The Role
Power BI front-end developer in the Data Domain (Manufacturing, Sales & Marketing, Purchasing, Logistics, …). Responsible for the Power BI front-end design, development, and delivery of highly visible data-driven applications in the Compressor Technique. You always take a quality-first approach, ensuring the data is visualized in a clear, accurate, and user-friendly manner. You ensure standards and best practices are followed and that documentation is created and maintained. Where needed, you take initiative and make recommendations to drive improvements. In this role you will also be involved in the tracking, monitoring, and performance analysis of production issues and the implementation of bugfixes and enhancements.
Skills & Experience
• The ideal candidate has a degree in Computer Science, Information Technology, or equivalent through experience.
• Strong knowledge of BI development principles, time intelligence, functions, dimensional modeling, and data visualization is required.
• Advanced knowledge and 5-10 years of experience with professional BI development & data visualization is preferred.
• You are familiar with data warehouse concepts.
• Knowledge of MS Azure (Data Lake, Databricks, SQL) is considered a plus.
• Experience with scripting languages such as PowerShell and Python to set up and automate Power BI platform-related activities is an asset.
• Good knowledge (oral and written) of English is required.
- A Natural Language Processing (NLP) expert with strong computer science fundamentals and experience working with deep learning frameworks. You will be working at the cutting edge of NLP and Machine Learning.
Roles and Responsibilities
- Work as part of a distributed team to research, build and deploy Machine Learning models for NLP.
- Mentor and coach other team members
- Evaluate the performance of NLP models and ideate on how they can be improved
- Support internal and external NLP-facing APIs
- Keep up to date on current research around NLP, Machine Learning and Deep Learning
Mandatory Requirements
- Any graduate degree, with at least 2 years of demonstrated experience as a Data Scientist.
Behavioral Skills
- Strong analytical and problem-solving capabilities.
- Proven ability to multi-task and deliver results within tight time frames
- Must have strong verbal and written communication skills
- Strong listening skills and eagerness to learn
- Strong attention to detail and the ability to work efficiently in a team as well as individually
Technical Skills
Hands-on experience with
- NLP
- Deep Learning
- Machine Learning
- Python
- BERT
Preferred Requirements
- Experience in Computer Vision is preferred
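One of the responsibilities above is evaluating the performance of NLP models. As a minimal sketch (the sentiment labels below are made up, and real projects would typically reach for scikit-learn's metrics rather than hand-rolled ones), precision, recall, and F1 for a binary classifier can be computed like this:

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for one positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical sentiment labels: 1 = positive, 0 = negative
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1]
p, r, f1 = precision_recall_f1(y_true, y_pred)
```

Tracking these numbers per class over time is one common way to decide whether a fine-tuned model (e.g., a BERT variant) has actually improved.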
Work closely with different Front Office and Support Function stakeholders, including but not restricted to Business Management, Accounts, Regulatory Reporting, Operations, Risk, Compliance, and HR, on all data collection and reporting use cases
Collaborate with Business and Technology teams to understand enterprise data, create an innovative narrative to explain, engage and enlighten regular staff members as well as executive leadership with data-driven storytelling
Solve data consumption and visualization through data as a service distribution model
Articulate findings clearly and concisely for different target use cases, including through presentations, design solutions, visualizations
Perform ad-hoc / automated report generation tasks using Power BI, Oracle BI, and Informatica
Perform data access/transfer and ETL automation tasks using Python, SQL, OLAP / OLTP, RESTful APIs, and IT tools (CFT, MQ-Series, Control-M, etc.)
Provide support and maintain the availability of BI applications irrespective of the hosting location
Resolve issues escalated from Business and Functional areas on data quality, accuracy, and availability, provide incident-related communications promptly
Work with strict deadlines on high priority regulatory reports
Serve as a liaison between business and technology to ensure that data-related business requirements for protecting sensitive data are clearly defined, communicated, well understood, and considered as part of operational prioritization and planning
To work for APAC Chief Data Office and coordinate with a fully decentralized team across different locations in APAC and global HQ (Paris).
General Skills:
Excellent knowledge of RDBMS and hands-on experience with complex SQL is a must; some experience with NoSQL and big data technologies like Hive and Spark would be a plus
Experience with industrialized reporting on BI tools like PowerBI, Informatica
Knowledge of data related industry best practices in the highly regulated CIB industry, experience with regulatory report generation for financial institutions
Knowledge of industry-leading data access, data security, Master Data, and Reference Data Management, and establishing data lineage
5+ years of experience in Data Visualization / Business Intelligence / ETL developer roles
Ability to multi-task and manage various projects simultaneously
Attention to detail
Ability to present to Senior Management, ExCo; excellent written and verbal communication skills
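The SQL-plus-Python report automation described above can be illustrated with a tiny self-contained sketch using SQLite from the standard library. The table and figures are invented; production reporting would run against Oracle or another warehouse and feed Power BI or Informatica, but the aggregate-on-a-schedule pattern is the same.

```python
import sqlite3

# In-memory database standing in for a real warehouse connection
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (desk TEXT, notional REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?)",
    [("rates", 100.0), ("rates", 50.0), ("fx", 75.0)],  # hypothetical figures
)

# Aggregate per desk -- the kind of query an automated regulatory
# report generator would run on a schedule (e.g., via Control-M).
report = dict(
    conn.execute(
        "SELECT desk, SUM(notional) FROM trades GROUP BY desk ORDER BY desk"
    ).fetchall()
)
```

The resulting dictionary could then be serialized and pushed to the downstream reporting tool or exposed through a REST endpoint in a data-as-a-service model.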
PriceLabs (chicagobusiness.com/innovators/what-if-you-could-adjust-prices-meet-demand) is cloud-based software for vacation and short-term rentals that helps them dynamically manage prices just the way large hotels and airlines do! Our mission is to help small businesses in the travel and tourism industry by giving them access to advanced analytical systems that are often restricted to large companies.
We're looking for someone with strong analytical capabilities who wants to understand how our current architecture and algorithms work, and help us design and develop long-lasting solutions to improve them. Depending on the needs of the day, the role will come with a good mix of teamwork, following our best practices, introducing us to industry best practices, independent thinking, and ownership of your work.
Responsibilities:
- Design, develop and enhance our pricing algorithms to enable new capabilities.
- Process, analyze, model, and visualize findings from our market level supply and demand data.
- Build and enhance internal and customer facing dashboards to better track metrics and trends that help customers use PriceLabs in a better way.
- Take ownership of product ideas and design discussions.
- Occasional travel to conferences to interact with prospective users and partners, and learn where the industry is headed.
Requirements:
- Bachelor's, Master's, or Ph.D. in Operations Research, Industrial Engineering, Statistics, Computer Science, or other quantitative/engineering fields.
- Strong understanding of analysis of algorithms, data structures and statistics.
- Solid programming experience, including the ability to quickly prototype an idea and test it out.
- Strong communication skills, including the ability and willingness to explain complicated algorithms and concepts in simple terms.
- Experience with relational databases and strong knowledge of SQL.
- Experience building data heavy analytical models in the travel industry.
- Experience in the vacation rental industry.
- Experience developing dynamic pricing models.
- Prior experience working in a fast-paced environment.
- Willingness to wear many hats.
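As a toy illustration of the dynamic pricing idea described above: the multipliers and occupancy thresholds below are invented for the example and are not PriceLabs' actual algorithm, which draws on market-level supply and demand data.

```python
def dynamic_price(base_price, occupancy, days_until_checkin):
    """Adjust a nightly base price by demand (occupancy) and lead time.

    Purely illustrative: real dynamic pricing models incorporate
    seasonality, events, and competitive market data as well.
    """
    price = base_price
    if occupancy > 0.8:          # high demand in the market: raise the price
        price *= 1.2
    elif occupancy < 0.3:        # low demand: discount to attract bookings
        price *= 0.9
    if days_until_checkin <= 3:  # last-minute: discount to fill the night
        price *= 0.85
    return round(price, 2)
```

For example, a $100 base rate in a hot market a month out would rise to $120, while the same rate in a slow market two days before check-in would drop to $76.50.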
at Lincode Labs India Pvt Ltd
This position is not for freshers. We are looking for candidates with at least 4 years of industry experience in AI/ML/CV.
Roles & Responsibilities
- Proven experience deploying and tuning open-source components into enterprise-ready production tooling
- Experience with data-centre (Metal as a Service – MAAS) and cloud deployment technologies (AWS or GCP Architect certificates required)
- Deep understanding of Linux from kernel mechanisms through user space management
- Experience on CI/CD (Continuous Integrations and Deployment) system solutions (Jenkins).
- Use monitoring tools (local and on public cloud platforms) such as Nagios, Prometheus, Sensu, ELK, CloudWatch, Splunk, and New Relic to trigger instant alerts, reports, and dashboards.
- Work closely with the development and infrastructure teams to analyze and design solutions with four-nines (99.99%) uptime for globally distributed, clustered, production and non-production virtualized infrastructure.
- Wide understanding of IP networking as well as data centre infrastructure
Skills
- Expert with software development tools and source-code management: understanding and managing issues and code changes, and grouping them into deployment releases in a stable and measurable way to maximize production stability.
- Must be expert at developing and using Ansible roles and configuring deployment templates with Jinja2.
- Solid understanding of data collection tools like Flume, Filebeat, Metricbeat, JMX Exporter agents.
- Extensive experience operating and tuning the Kafka streaming-data platform, specifically as a message queue for big-data processing
- Strong understanding of, and hands-on experience with:
- Apache Spark framework, specifically Spark Core and Spark Streaming
- Orchestration platforms: Mesos and Kubernetes
- Data storage platforms: Elastic Stack, Carbon, ClickHouse, Cassandra, Ceph, HDFS
- Core presentation technologies: Kibana and Grafana
- Excellent scripting and programming skills (Bash, Python, Java, Go, Rust). Must have previous experience with Rust to support and improve in-house developed products
Certification
- Red Hat Certified Architect certificate or equivalent required
- CCNA certificate required
- 3-5 years of experience running open-source big data platforms
Job Description
The role requires experience with AWS, as well as programming experience in Python and Spark.
Roles & Responsibilities
You Will:
- Translate functional requirements into technical design
- Interact with clients and internal stakeholders to understand the data and platform requirements in detail and determine core cloud services needed to fulfil the technical design
- Design, Develop and Deliver data integration interfaces in AWS
- Design, Develop and Deliver data provisioning interfaces to fulfil consumption needs
- Deliver data models on the cloud platform; it could be on AWS Redshift or SQL
- Design, Develop and Deliver data integration interfaces at scale using Python / Spark
- Automate core activities to minimize the delivery lead times and improve the overall quality
- Optimize platform cost by selecting right platform services and architecting the solution in a cost-effective manner
- Manage code and deployments using DevOps and CI/CD processes
- Deploy logging and monitoring across the different integration points for critical alerts
You Have:
- Minimum 5 years of software development experience
- Bachelor's and/or Master’s degree in computer science
- Strong Consulting skills in data management including data governance, data quality, security, data integration, processing and provisioning
- Delivered data management projects on AWS
- Translated complex analytical requirements into technical design including data models, ETLs and Dashboards / Reports
- Experience deploying dashboards and self-service analytics solutions on both relational and non-relational databases
- Experience with different computing paradigms in databases such as In-Memory, Distributed, Massively Parallel Processing
- Successfully delivered large scale data management initiatives covering Plan, Design, Build and Deploy phases leveraging different delivery methodologies including Agile
- Strong knowledge of continuous integration, static code analysis and test-driven development
- Experience in delivering projects in a highly collaborative delivery model with teams at onsite and offshore
- Must have excellent analytical and problem-solving skills
- Delivered change management initiatives focused on driving data platforms adoption across the enterprise
- Strong verbal and written communication skills are a must, as well as the ability to work effectively across internal and external organizations
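One point above is delivering data integration interfaces at scale while keeping platform cost down. A common technique behind both is writing data with Hive-style partitioned keys so query engines can prune partitions and scan less. The sketch below is illustrative only: the prefix, dataset, and file names are hypothetical, and a real pipeline would write via Spark or boto3 rather than just building key strings.

```python
from datetime import date

def partitioned_key(prefix, dataset, d, filename):
    """Build a Hive-style partitioned object key (year=/month=/day=).

    Engines like Spark, Athena, or Redshift Spectrum can use these
    path components to prune partitions, reducing the data scanned
    per query -- one common cost lever on AWS object storage.
    """
    return (
        f"{prefix}/{dataset}/"
        f"year={d.year:04d}/month={d.month:02d}/day={d.day:02d}/{filename}"
    )

key = partitioned_key("raw", "orders", date(2024, 1, 5), "part-0000.parquet")
```

A daily load job would call this once per batch, so downstream queries filtering on date touch only the matching prefixes.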
at Bigdatamatica Solutions Pvt Ltd
Top MNC looking for candidates in Business Analytics (4-8 years of experience).
Requirement :
- Experience in metric development & Business analytics
- High Data Skill Proficiency/Statistical Skills
- Tools: R, SQL, Python, Advanced Excel
- Good verbal communication skills
- Supply Chain domain knowledge
*Job Summary*
Duration: 6-month contract based in Hyderabad
Availability: 1 week/Immediate
Qualification: Graduate/PG from Reputed University
*Key Skills*
R, SQL, Advanced Excel, Python
*Required Experience and Qualifications*
5 to 8 years of Business Analytics experience.