7+ Optimization Jobs in Bangalore (Bengaluru) | Optimization Job openings in Bangalore (Bengaluru)
Apply to 7+ Optimization Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Optimization Job opportunities across top companies like Google, Amazon & Adobe.
Review Criteria:
- Strong Dremio / Lakehouse Data Architect profile
- 5+ years of experience in Data Architecture / Data Engineering, with at least 3 years of hands-on Dremio experience
- Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
- Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, and Iceberg, along with distributed query planning concepts
- Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
- Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
- Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
- Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
- Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies
Preferred:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments
Role & Responsibilities:
You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
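The SQL optimization and query-tuning work described above can be illustrated with a generic, engine-agnostic sketch (SQLite below is only a stand-in; Dremio's own accelerators are reflections and caching, but the discipline of checking the query plan before and after adding an access structure carries over):

```python
import sqlite3

# Stand-in demonstration of query tuning: inspect the plan, add an index,
# confirm the plan now uses it. Table and column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("south", 10.0), ("west", 5.0), ("south", 7.5)])

plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM sales WHERE region = 'south'"
).fetchall()                                   # full table scan at this point

conn.execute("CREATE INDEX idx_sales_region ON sales (region)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM sales WHERE region = 'south'"
).fetchall()                                   # plan now mentions idx_sales_region

total = conn.execute(
    "SELECT SUM(amount) FROM sales WHERE region = 'south'"
).fetchone()[0]
print(total)  # 17.5
```

The same before/after-plan check is how reflection and caching changes are validated in a lakehouse engine, just with that engine's own profiling tools.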
Ideal Candidate:
- Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
Preferred:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
- Exposure to Snowflake, Databricks, or BigQuery environments.
- Experience in high-tech, manufacturing, or enterprise data modernization programs.
Job Role: Optimization Engineer - C Programming
Experience: 3 to 8 Years
Location: Bangalore, Pune, Delhi
We're hiring an Optimization Engineer skilled in C Programming and Operations Research / Optimization to design and optimize algorithms that solve complex business and engineering problems.
Key Responsibilities:
- Develop and maintain high-performance software using C.
- Build and implement optimization models (linear, integer, nonlinear).
- Collaborate with teams to deliver scalable, efficient solutions.
- Analyze and improve existing algorithms for performance and scalability.
Must-Have Skills:
- Expertise in C Programming and Operations Research / Optimization.
- Strong in data structures, algorithms, and memory management.
- Hands-on with tools like CPLEX, Gurobi, or COIN-OR.
- Python experience is an added advantage.
Preferred Skills:
- Knowledge of Python, C++, or Java.
- Familiarity with AMPL, GAMS, or solver APIs.
- Understanding of HPC, parallel computing, or multi-threading.
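As a hedged sketch of the integer-optimization modeling this role describes (production work would call CPLEX, Gurobi, or COIN-OR through their solver APIs; this toy branch-and-bound in Python, listed above as an added advantage, is only illustrative), a 0/1 knapsack can be solved like this:

```python
# Toy branch-and-bound for a 0/1 knapsack: branch on take/skip per item,
# prune with an LP-relaxation (fractional) upper bound.
def knapsack(values, weights, capacity):
    n = len(values)
    best = [0]

    def bound(i, value, room):
        # Upper bound: greedily take remaining items by value density,
        # allowing one fractional item (the LP relaxation).
        items = sorted(range(i, n), key=lambda j: values[j] / weights[j],
                       reverse=True)
        b = value
        for j in items:
            if weights[j] <= room:
                room -= weights[j]
                b += values[j]
            else:
                b += values[j] * room / weights[j]
                break
        return b

    def branch(i, value, room):
        if value > best[0]:
            best[0] = value
        if i == n or bound(i, value, room) <= best[0]:
            return                                   # prune: cannot improve
        if weights[i] <= room:
            branch(i + 1, value + values[i], room - weights[i])  # take item i
        branch(i + 1, value, room)                               # skip item i

    branch(0, 0, capacity)
    return best[0]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```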
Position: Sybase Developer
Location: Bangalore / Pune
Experience Level: 8+ Years
Proven experience as a Sybase Developer
Familiarity with Oracle, Sybase, performance tuning, and optimization
Required Skills: Sybase, Performance tuning and Optimization
Roles and Responsibilities:
- Perform day-to-day management of campaigns, including oversight of bid recommendations, execution details, and budgets.
- Conduct thorough keyword analysis: create keyword lists, analyze search volume, and make recommendations.
- Experiment with new platforms and channels.
- Ensure ads are creative and displayed appropriately.
- Monitor costs and return on investment (ROI).
- End-to-end campaign management and setup.
- Continual testing and optimization.
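The cost and ROI monitoring above comes down to simple arithmetic; a minimal sketch with made-up spend and revenue figures:

```python
# ROI and ROAS for a campaign, computed from illustrative numbers.
def roi(revenue, cost):
    """Return on investment, as a fraction of cost."""
    return (revenue - cost) / cost

def roas(revenue, cost):
    """Return on ad spend: revenue generated per unit of spend."""
    return revenue / cost

spend, revenue = 1_000.0, 4_500.0
print(f"ROI:  {roi(revenue, spend):.0%}")    # ROI:  350%
print(f"ROAS: {roas(revenue, spend):.1f}x")  # ROAS: 4.5x
```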
Skills and Qualifications:
- Bachelor’s degree required.
- 1-2 years of experience running Google Ads campaigns.
- Relevant previous experience and success within a similar PPC or Digital Marketing focused role.
- Google Ads and Google Analytics certified.
- Current knowledge of Google Analytics and other digital marketing software and tools.
- Strong analytical ability and experience to work with a variety of data sources.
- Strong Excel experience.
- Ability to communicate effectively.
- Working with Ruby, Python, Perl, and Java
- Troubleshooting, with working knowledge of various tools, open-source technologies, and cloud services
- Configuring and managing databases and cache layers such as MySQL, MongoDB, Elasticsearch, and Redis
- Setting up all databases and their optimizations (sharding, replication, shell scripting, etc.)
- User creation, domain handling, service handling, backup management, port management, SSL services
- Planning, testing, and development of IT infrastructure (server configuration and databases), and handling technical issues related to servers, Docker, and VM optimization
- Awareness of DB management, server-related work, and Elasticsearch
- Selecting and deploying appropriate CI/CD tools
- Striving for continuous improvement and building a CI/CD pipeline (continuous integration, continuous delivery, and continuous deployment)
- Experience working on Linux-based infrastructure
- Awareness of critical DevOps concepts and Agile principles
- 6-8 years of experience
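One common approach to the database sharding mentioned above is consistent hashing, sketched here with hypothetical node names: keys keep mapping to the same shard as the ring changes, which limits data movement when nodes are added or removed.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring for routing keys to database shards."""

    def __init__(self, nodes, vnodes=64):
        self._ring = []                      # sorted (hash, node) points
        for node in nodes:
            for i in range(vnodes):          # virtual nodes smooth the load
                h = self._hash(f"{node}#{i}")
                bisect.insort(self._ring, (h, node))

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def shard_for(self, key):
        # Walk clockwise to the first ring point at or after the key's hash,
        # wrapping around to the start of the ring if necessary.
        h = self._hash(key)
        idx = bisect.bisect(self._ring, (h, "")) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["db-1", "db-2", "db-3"])
print(ring.shard_for("user:42"))   # deterministically one of db-1..db-3
```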
About Graphene
Graphene is a Singapore-headquartered AI company that has been recognized as Singapore's best start-up by Switzerland's Seedstars World and awarded best AI platform for healthcare at VivaTech Paris. Graphene India is also a member of the exclusive NASSCOM DeepTech club. We are developing an AI platform that is disrupting and replacing traditional market research with unbiased insights, with a focus on healthcare, consumer goods, and financial services.
Graphene was founded by corporate leaders from Microsoft and P&G and works closely with the Singapore government and universities to create cutting-edge technology, which is gaining traction with many Fortune 500 companies in India, Asia, and the USA.
Graphene's culture is grounded in delivering customer delight by recruiting high-potential talent and providing an intense learning and collaborative atmosphere; many ex-employees have gone on to be hired by large companies across the world.
Graphene has a six-year track record of delivering financially sustainable growth and is one of the rare start-ups that is self-funded yet profitable and debt-free. We have already created a strong bench of Singaporean leaders and are recruiting and grooming more talent with a focus on our US expansion.
Job Title: Cloud Admin
Responsibilities and Duties
Cloud Systems Administrator position responsibilities include:
- Configure and fine-tune cloud infrastructure
- Install and configure virtual cloud environments
- Support cloud servers, including security configurations and patching
- Establish Virtual Private Networks (VPNs) for customers
- Develop scripts for client/server automation
- Monitoring, process automation, and recovery
- SQL Server and NoSQL DB configuration, maintenance, and administration
- Network administration and configuration
Qualifications and Skills:
- Engineering degree in Computer Science or equivalent
- Minimum of 4 years of Linux/Windows server administration
- Proficient with MS SQL Server administration, Cosmos DB
- Experience with supporting application and database servers
- Strong experience with IIS and other web servers
- Strong experience with Security – Identity, Access Control and Data Protection
- Strong experience with Microsoft Azure
- DevOps skills – Automated build and release management experience
- Resource and Cost Optimization
- Experience with Docker and Kubernetes. Certifications like CKAD a plus
- Certification in Azure Administration (AZ-104) a plus
General Attributes
- Should have the passion to work for the organization and grow with it
- A good team player, while also able to contribute individually as and when required
- Should be able to mentor juniors if the need arises
- Should be able to learn any new technology the project demands
- Leadership capability; self-motivated; highly collaborative; comfortable working through uncertainties and unknowns; exceptional analytical and problem-solving skills; gets things done; committed to continuous learning




