11+ OLTP Jobs in Bangalore (Bengaluru)
What is the role?
You will be responsible for developing and designing front-end web architecture, ensuring the responsiveness of applications, and working alongside graphic designers on web design features, among other duties. You will own the functional and technical track of the project.
Key Responsibilities
- Develop and automate large-scale, high-performance data processing systems (batch and/or streaming).
- Establish high-quality software engineering practices for building data infrastructure and pipelines at scale.
- Lead data engineering projects to ensure pipelines are reliable, efficient, testable, and maintainable.
- Optimize performance to meet high-throughput and scale requirements.
What are we looking for?
- 4+ years of relevant industry experience.
- Working with data at the terabyte scale.
- Experience designing, building and operating robust distributed systems.
- Experience designing and deploying high throughput and low latency systems with reliable monitoring and logging practices.
- Building and leading teams.
- Working knowledge of relational databases like PostgreSQL/MySQL.
- Experience with Python / Spark / Kafka / Celery (a minimal streaming sketch follows this list).
- Experience working with OLTP and OLAP systems.
- Excellent communication skills, both written and verbal.
- Experience working in the cloud, e.g., AWS, Azure, or GCP.
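To make this concrete, here is a minimal sketch (not from the posting itself) of the kind of batch/streaming work described above: a PySpark Structured Streaming job that consumes events from Kafka and lands them as Parquet for OLAP-style queries. The topic name, bootstrap servers, schema, and paths are illustrative assumptions.

```python
# Minimal PySpark Structured Streaming sketch: consume events from Kafka
# and append them to columnar storage for OLAP-style analysis.
# Topic name, servers, schema, and paths are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("action", StringType()),
    StructField("ts", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder
    .option("subscribe", "events")                        # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Write micro-batches to Parquet for downstream OLAP queries.
query = (
    events.writeStream.format("parquet")
    .option("path", "/data/events")              # placeholder path
    .option("checkpointLocation", "/data/_chk")  # required for fault tolerance
    .start()
)
query.awaitTermination()
```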
Whom will you work with?
You will work with a top-notch tech team, collaborating closely with the architect and the engineering head.
What can you look for?
A wholesome opportunity in a fast-paced environment that will let you juggle multiple concepts while maintaining quality, interact and share your ideas, and learn a great deal at work. You will work with a team of highly talented young professionals and enjoy the benefits of being at this company.
We are
We strive to make selling fun with our SaaS incentive gamification product. Company is the #1 gamification software that automates and digitizes Sales Contests and Commission Programs. With game-like elements, rewards, recognitions, and complete access to relevant information, Company turbocharges an entire salesforce. Company also empowers Sales Managers with easy-to-publish game templates, leaderboards, and analytics to help accelerate performances and sustain growth.
We are a fun and high-energy team, with people from diverse backgrounds - united under the passion of getting things done. Rest assured that you shall get complete autonomy in your tasks and ample opportunities to develop your strengths.
Way forward
If you find this role exciting and want to join us in Bangalore, India, then apply by clicking below. Provide your details and upload your resume. All resumes will be screened; shortlisted candidates will be invited for a discussion, and on mutual alignment and agreement we will proceed with hiring.
Specific Responsibilities
- Minimum of 2 years of experience in Google BigQuery and Google Cloud Platform.
- Design and develop the ETL framework using BigQuery.
- Expertise in BigQuery concepts like nested queries, clustering, partitioning, etc.
- Working experience with clickstream databases and Google Analytics/Adobe Analytics.
- Should be able to automate data loads from BigQuery using APIs or a scripting language (a minimal sketch follows this list).
- Good experience with advanced SQL concepts.
- Good experience with Adobe Launch web, mobile, and e-commerce tag implementation.
- Identify complex, fuzzy problems, break them down into smaller parts, and implement creative, data-driven solutions.
- Responsible for defining, analyzing, and communicating key metrics and business trends to the management teams.
- Identify opportunities to improve conversion and user experience through data. Influence product and feature roadmaps.
- Must have a passion for data quality and constantly look to improve the system. Champion data-driven decision making among stakeholders and drive change management.
- Understand requirements and translate business and technical problems into analytics problems.
- Effective storyboarding and presentation of the solution to the client and leadership.
- Client engagement & management
- Ability to interface effectively with multiple levels of management and functional disciplines.
- Assist in coaching individuals technically and on soft skills during the project and as part of the client project's training program.
- 2 to 3 years of working experience in Google BigQuery and Google Cloud Platform.
- Relevant experience in Consumer Tech/CPG/Retail industries
- Bachelor's degree in Engineering, Computer Science, Math, Statistics, or a related discipline.
- Strong problem-solving and web analytics skills. Acute attention to detail.
- Experience in analyzing large, complex, multi-dimensional data sets.
- Experience in one or more roles in an online eCommerce or online support environment.
- Expertise in Google BigQuery and Google Cloud Platform.
- Experience in advanced SQL and a scripting language (Python/R).
- Hands-on experience with BI tools (Tableau, Power BI).
- Working experience with, and understanding of, Adobe Analytics or Google Analytics.
- Experience in creating and debugging website and app tracking (Omnibug, Dataslayer, GA Debugger, etc.).
- Excellent analytical thinking, analysis, and problem-solving skills.
- Knowledge of other GCP services is a plus
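The sketch below illustrates two of the BigQuery concepts listed above, partitioning and clustering, along with an automated load using the google-cloud-bigquery Python client. The project, dataset, schema, and GCS URI are placeholder assumptions, not details from the posting.

```python
# Sketch: create a date-partitioned, clustered BigQuery table and automate
# a clickstream load from GCS. Project, dataset, and URI are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project
table_id = "my-project.analytics.clickstream"   # placeholder table

schema = [
    bigquery.SchemaField("event_date", "DATE"),
    bigquery.SchemaField("user_id", "STRING"),
    bigquery.SchemaField("page", "STRING"),
]

table = bigquery.Table(table_id, schema=schema)
table.time_partitioning = bigquery.TimePartitioning(field="event_date")
table.clustering_fields = ["user_id"]  # clustering prunes scans on user lookups
client.create_table(table, exists_ok=True)

# Automate a daily load from GCS (e.g. invoked by a scheduler).
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON
)
load_job = client.load_table_from_uri(
    "gs://my-bucket/clickstream/2024-01-01/*.json",  # placeholder URI
    table_id,
    job_config=job_config,
)
load_job.result()  # block until the load completes
```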
About Us:
Established in early 2016, Sentienz is an IoT, AI, and big data technology company: a dynamic team of highly skilled data scientists and engineers who specialize in building internet-scale platforms and petabyte-scale digital insights solutions. We are experts in developing state-of-the-art machine learning models as well as advanced analytics platforms.
Sentienz is proud of its flagship product, Sentienz Akiro, an AI-powered connectivity platform. By letting users communicate easily across their devices, Akiro transforms consumer engagement and provides essential feedback on customer involvement within your app. Akiro also enables IoT machine-to-machine (M2M) communication with unmatched real-time access.
If you want to be part of the future of AI-driven connectivity and IoT innovation at Sentienz, then join us today!
Position: Senior Data Engineer
Job Specifications:
- 5+ years of experience in data engineering or a similar role.
- Proficiency in programming languages such as Scala, Java, and Python.
- Experience with big data technologies such as Spark.
- Strong SQL skills and experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Hands-on experience with data integration tools like Trino and Airbyte.
- Experience with scheduling and workflow automation tools such as Dolphin Scheduler.
- Familiarity with data visualization tools like Superset.
- Hands-on experience with cloud platforms (e.g., AWS, Google Cloud, Azure) and their data services.
- Bachelor's or master's degree in Computer Science or a related field.
- Excellent problem-solving skills, attention to detail, and good communication skills.
- Candidates must be fast learners and adaptable to a fast-paced development environment.
Location: On-site; Bangalore/Bangalore Urban
Availability to Join: Immediate Joiners
Roles and Responsibilities:
- Design, develop, and maintain the end-to-end MLOps infrastructure from the ground up, leveraging open-source systems across the entire MLOps landscape.
- Create pipelines for data ingestion, data transformation, and building, testing, and deploying machine learning models, as well as monitor and maintain the performance of these models in production.
- Manage the MLOps stack, including version control systems, continuous integration and deployment tools, containerization, orchestration, and monitoring systems.
- Ensure that the MLOps stack is scalable, reliable, and secure.
Skills Required:
- 3-6 years of MLOps experience
- Preferably worked in the startup ecosystem
Primary Skills:
- Experience with E2E MLOps systems like ClearML, Kubeflow, MLflow, etc. (a minimal tracking sketch follows these lists).
- Technical expertise in MLOps: Should have a deep understanding of the MLOps landscape and be able to leverage open-source systems to build scalable, reliable, and secure MLOps infrastructure.
- Programming skills: Proficient in at least one programming language, such as Python, and have experience with data science libraries, such as TensorFlow, PyTorch, or Scikit-learn.
- DevOps experience: Should have experience with DevOps tools and practices, such as Git, Docker, Kubernetes, and Jenkins.
Secondary Skills:
- Version Control Systems (VCS) tools like Git and Subversion
- Containerization technologies like Docker and Kubernetes
- Cloud Platforms like AWS, Azure, and Google Cloud Platform
- Data Preparation and Management tools like Apache Spark, Apache Hadoop, and SQL databases like PostgreSQL and MySQL
- Machine Learning Frameworks like TensorFlow, PyTorch, and Scikit-learn
- Monitoring and Logging tools like Prometheus, Grafana, and Elasticsearch
- Continuous Integration and Continuous Deployment (CI/CD) tools like Jenkins, GitLab CI, and CircleCI
- Explainability and interpretability tools like LIME and SHAP
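As a small illustration of the E2E MLOps systems mentioned under Primary Skills, here is a minimal MLflow tracking sketch: it trains a toy scikit-learn model and logs parameters, metrics, and the model artifact. The experiment name, data, and hyperparameters are illustrative assumptions.

```python
# Sketch: track an experiment run with MLflow, one of the E2E MLOps
# systems named above. Experiment name and parameters are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-experiment")  # placeholder experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    # Log params, metrics, and the model artifact for reproducibility.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```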
About Us:
Small businesses are the backbone of the US economy, comprising almost half of the GDP and the private workforce. Yet, big banks don’t provide the access, assistance and modern tools that owners need to successfully grow their business.
We started Novo to challenge the status quo—we’re on a mission to increase the GDP of the modern entrepreneur by creating the go-to banking platform for small businesses (SMBs). Novo is flipping the script of the banking world, and we’re excited to lead the small business banking revolution.
At Novo, we're here to help entrepreneurs, freelancers, startups and SMBs achieve their financial goals by empowering them with an operating system that makes business banking as easy as iOS. We developed modern bank accounts and tools to help save time and increase cash flow. Our unique product integrations enable easy access to tracking payments, transferring money internationally, managing business transactions and more. We've made a big impact in a short amount of time, helping thousands of organizations access powerfully simple business banking.
We are looking for a Senior Data Scientist who is enthusiastic about using data and technology to solve complex business problems. If you're passionate about leading and helping to architect and develop thoughtful data solutions, then we want to chat. Are you ready to revolutionize the small business banking industry with us?
About the Role:
- Build and manage predictive models focused on credit risk, fraud, conversions, churn, consumer behaviour, etc.
- Provide best practices and direction for data analytics and business decision making across multiple projects and functional areas.
- Implement performance optimizations and best practices for scalable data models, pipelines, and modelling.
- Resolve blockers and help the team stay productive
- Take part in building the team and iterating on hiring processes
Requirements for the Role:
- 4+ years of experience in data science roles focused on managing data processes, modelling, and dashboarding.
- Strong experience in Python and SQL and an in-depth understanding of modelling techniques.
- Experience working with pandas, scikit-learn, and visualization libraries like Plotly, Bokeh, etc.
- Prior experience with credit risk modelling will be preferred (a minimal sketch follows this list).
- Deep knowledge of Python to write scripts that manipulate data and generate automated reports.
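For illustration only, here is a minimal sketch of the kind of credit-risk modelling mentioned above, using pandas and scikit-learn. The dataset path, feature names, and target column are hypothetical.

```python
# Sketch of a simple credit-risk model of the kind described above.
# The CSV path, feature names, and target column are placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("loans.csv")  # placeholder dataset
features = ["income", "debt_to_income", "credit_history_len"]  # placeholders
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["defaulted"], test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Credit-risk models are usually judged on ranking power (AUC), not accuracy.
probs = model.predict_proba(X_test)[:, 1]
print(f"AUC: {roc_auc_score(y_test, probs):.3f}")
```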
How We Define Success:
- Expand access to data-driven decision making across the organization.
- Solve problems in risk, marketing, growth, and customer behaviour through analytics models that increase efficacy.
Nice To Have, but Not Required:
- Experience in dashboarding libraries like Python Dash and exposure to CI/CD
- Exposure to big data tools like Spark, and some core tech knowledge around API’s, data streaming etc.
Novo values diversity as a core tenet of the work we do and the businesses we serve. We are an equal opportunity employer and do not discriminate on the basis of race, religion, ethnicity, national origin, citizenship, gender, gender identity, sexual orientation, age, veteran status, disability, genetic information, or any other protected characteristic.
Develop complex queries, pipelines and software programs to solve analytics and data mining problems
Interact with other data scientists, product managers, and engineers to understand business problems and technical requirements and to deliver predictive, smart data solutions
Prototype new applications or data systems
Lead data investigations to troubleshoot data issues that arise along the data pipelines
Collaborate with different product owners to incorporate data science solutions
Maintain and improve data science platform
Must Have
BS/MS/PhD in Computer Science, Electrical Engineering or related disciplines
Strong fundamentals: data structures, algorithms, database
5+ years of software industry experience with 2+ years in analytics, data mining, and/or data warehouse
Fluency with Python
Experience developing web services using REST approaches (a minimal sketch follows this list).
Proficiency with SQL/Unix/Shell
Experience in DevOps (CI/CD, Docker, Kubernetes)
Self-driven, challenge-loving, and detail-oriented, with team spirit, excellent communication skills, and the ability to multi-task and manage expectations
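As a quick illustration of the REST web services mentioned in the list above, here is a minimal Flask sketch. The endpoint path and the scoring logic are placeholders, not the company's actual API.

```python
# Sketch of a REST web service in Python (Flask). The route and the
# "model" are illustrative placeholders for a real prediction service.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/v1/score", methods=["POST"])
def score():
    payload = request.get_json(force=True)
    # Placeholder scoring: a real service would invoke a trained model here.
    value = float(payload.get("amount", 0))
    return jsonify({"score": min(value / 1000.0, 1.0)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```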
Preferred
Industry experience with big data processing technologies such as Spark and Kafka
Experience with machine learning algorithms and/or R a plus
Experience in Java/Scala a plus
Experience with MPP analytics engines like Vertica
Experience with data integration tools like Pentaho/SAP Analytics Cloud
- We are looking for a Data Engineer with 3-5 years of experience in Python, SQL, AWS (EC2, S3, Elastic Beanstalk, API Gateway), and Java.
- The applicant must be able to perform data mapping (data type conversion, schema harmonization) using Python, SQL, and Java (a minimal sketch follows this description).
- The applicant must be familiar with and have programmed ETL interfaces (OAuth, REST APIs, ODBC) using the same languages.
- The company is looking for someone who shows an eagerness to learn and who asks concise questions when communicating with teammates.
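A minimal sketch of the data-mapping task described above, assuming pandas is available: two hypothetical source files with different column names and types are harmonized into one target schema. All file and column names are illustrative.

```python
# Sketch: harmonize two source schemas into one target schema
# (data type conversion + column renaming). Names are placeholders.
import pandas as pd

# Map each source's columns onto the target schema, then coerce types.
COLUMN_MAP_A = {"cust_id": "customer_id", "amt": "amount", "dt": "date"}
COLUMN_MAP_B = {"CustomerID": "customer_id", "Total": "amount", "Day": "date"}

def harmonize(df: pd.DataFrame, column_map: dict) -> pd.DataFrame:
    out = df.rename(columns=column_map)[list(column_map.values())]
    out["customer_id"] = out["customer_id"].astype(str)
    out["amount"] = pd.to_numeric(out["amount"], errors="coerce")
    out["date"] = pd.to_datetime(out["date"], errors="coerce")
    return out

combined = pd.concat(
    [harmonize(pd.read_csv("source_a.csv"), COLUMN_MAP_A),   # placeholder file
     harmonize(pd.read_csv("source_b.csv"), COLUMN_MAP_B)],  # placeholder file
    ignore_index=True,
)
```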
Data Platform engineering at Uber is looking for a strong Technical Lead (Level 5a Engineer) who has built high-quality platforms and services that can operate at scale. A 5a Engineer at Uber exhibits the following qualities:
- Demonstrate tech expertise: demonstrate technical skills to go very deep or broad in solving classes of problems or creating broadly leverageable solutions.
- Execute large-scale projects: define, plan, and execute complex and impactful projects. You communicate the vision to peers and stakeholders.
- Collaborate across teams: act as a domain resource to engineers outside your team and help them leverage the right solutions. Facilitate technical discussions and drive to a consensus.
- Coach engineers: coach and mentor less experienced engineers and deeply invest in their learning and success. You give and solicit feedback, both positive and negative, to help improve the entire team.
- Tech leadership: lead the effort to define best practices in your immediate team, and help the broader organization establish better technical or business processes.
What You’ll Do
- Build a scalable, reliable, operable and performant data analytics platform for Uber’s engineers, data scientists, products and operations teams.
- Work alongside the pioneers of big data systems such as Hive, YARN, Spark, Presto, Kafka, and Flink to build a highly reliable, performant, easy-to-use software system for Uber's planet scale of data.
- Become proficient in the multi-tenancy, resource isolation, abuse prevention, and self-serve debuggability aspects of a high-performance, large-scale service while building these capabilities for Uber's engineers and operations folks.
What You’ll Need
- 7+ years of experience in building large-scale products, data platforms, and distributed systems in a high-caliber environment.
- Architecture: Identify and solve major architectural problems by going deep in your field or broad across different teams. Extend, improve, or, when needed, build solutions to address architectural gaps or technical debt.
- Software Engineering/Programming: Create frameworks and abstractions that are reliable and reusable. You have advanced knowledge of at least one programming language and are happy to learn more. Our core languages are Java, Python, Go, and Scala.
- Data Engineering: Expertise in one of the big data analytics technologies we currently use, such as Apache Hadoop (HDFS and YARN), Apache Hive, Impala, Drill, Spark, Tez, Presto, Calcite, Parquet, or Arrow. Under-the-hood experience with similar systems such as Vertica, Apache Impala, Drill, Google Borg, Google BigQuery, Amazon EMR, Amazon Redshift, Docker, Kubernetes, Mesos, etc.
- Execution & Results: You tackle large technical projects/problems that are not clearly defined. You anticipate roadblocks and have strategies to de-risk timelines. You orchestrate work that spans multiple teams and keep your stakeholders informed.
- A team player: You believe that you can achieve more on a team, that the whole is greater than the sum of its parts. You rely on others' candid feedback for continuous improvement.
- Business acumen: You understand requirements beyond the written word. Whether you're working on an API used by other developers, an internal tool consumed by our operations teams, or a feature used by millions of customers, your attention to detail leads to a delightful user experience.
Role and Responsibilities
- Build a low-latency serving layer that powers DataWeave's Dashboards, Reports, and Analytics functionality (a minimal caching sketch follows this list).
- Build robust RESTful APIs that serve data and insights to DataWeave and other products.
- Design user interaction workflows on our products and integrate them with data APIs.
- Help stabilize and scale our existing systems. Help design the next generation systems.
- Scale our back end data and analytics pipeline to handle increasingly large amounts of data.
- Work closely with the Head of Products and UX designers to understand the product vision and design philosophy
- Lead/be a part of all major tech decisions. Bring in best practices. Mentor younger team members and interns.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Be a tech thought leader. Add passion and vibrance to the team. Push the envelope.
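As referenced in the first responsibility, here is a minimal cache-aside sketch for a low-latency serving layer, using Redis in front of a placeholder database query. The host, key scheme, TTL, and fetch_report_from_db helper are hypothetical.

```python
# Sketch: cache-aside pattern for a low-latency serving layer using Redis.
# Host, key scheme, TTL, and the DB helper are illustrative placeholders.
import json
import redis

cache = redis.Redis(host="localhost", port=6379)  # placeholder host

def fetch_report_from_db(report_id: str) -> dict:
    # Placeholder for the real analytics query (e.g. against MongoDB/MySQL).
    return {"id": report_id, "rows": []}

def get_report(report_id: str) -> dict:
    key = f"report:{report_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)              # cache hit: fast path
    report = fetch_report_from_db(report_id)
    cache.setex(key, 300, json.dumps(report))  # cache for 5 minutes
    return report
```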
Skills and Requirements
- 8-15 years of experience building and scaling APIs and web applications.
- Experience building and managing large scale data/analytics systems.
- Have a strong grasp of CS fundamentals and excellent problem solving abilities. Have a good understanding of software design principles and architectural best practices.
- Be passionate about writing code and have experience coding in multiple languages, including at least one scripting language, preferably Python.
- Be able to argue convincingly why feature X of language Y rocks/sucks, or why a certain design decision is right/wrong, and so on.
- Be a self-starter—someone who thrives in fast paced environments with minimal ‘management’.
- Have experience working with multiple storage and indexing technologies such as MySQL, Redis, MongoDB, Cassandra, and Elasticsearch.
- Good knowledge (including internals) of messaging systems such as Kafka and RabbitMQ.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Exposure to one or more centralized logging, monitoring, and instrumentation tools, such as Kibana, Graylog, StatsD, Datadog etc.
- Working knowledge of building websites and apps. Good understanding of integration complexities and dependencies.
- Working knowledge of Linux server administration as well as the AWS ecosystem is desirable.
- It's a huge bonus if you have personal projects (including open source contributions) that you work on in your spare time. Show off some of the projects you have hosted on GitHub.