11+ Oracle Developer Jobs in Pune | Oracle Developer Job openings in Pune
Oracle OAS Developer
Senior OAS/OAC (Oracle Analytics) designer and developer with 3+ years of experience. Has worked on the new Oracle Analytics platform, using its latest features and custom plug-ins, and designing new ones in Java. Has a good understanding of the various graphs, data points, and their appropriate use for financial data display. Has worked on performance tuning and built complex data security requirements.
Qualifications
Bachelor's degree in Engineering/Computer Science.
Additional information
Knowledge of Financial and HR dashboards
A global business process management company
B1 – Data Scientist - Kofax Accredited Developers
Requirement – 3
Mandatory –
- Accreditation of Kofax KTA / KTM
- Experience in Kofax Total Agility Development – 2-3 years minimum
- Ability to develop and translate functional requirements to design
- Experience in requirement gathering, analysis, development, testing, documentation, version control, SDLC, Implementation and process orchestration
- Experience in Kofax Customization, writing Custom Workflow Agents, Custom Modules, Release Scripts
- Application development using Kofax and KTM modules
- Good/advanced understanding of Machine Learning / NLP / Statistics
- Exposure to or understanding of RPA/OCR/Cognitive Capture tools such as Appian, UiPath, Automation Anywhere, etc.
- Excellent communication skills and collaborative attitude
- Ability to work with multiple teams and stakeholders, such as the Analytics, RPA, Technology and Project Management teams
- Good understanding of compliance, data governance and risk control processes
Total Experience – 7-10 Years in BPO/KPO/ITES/BFSI/Retail/Travel/Utilities/Service Industry
Good to have
- Previous experience working in Agile & hybrid delivery environments
- Knowledge of VB.Net, C# (C-Sharp), SQL Server, Web services
Qualification -
- Master's in Statistics/Mathematics/Economics/Econometrics, or BE/B.Tech, MCA or MBA
at Concinnity Media Technologies
- Develop, train, and optimize machine learning models using Python, ML algorithms, deep learning frameworks (e.g., TensorFlow, PyTorch), and other relevant technologies.
- Implement MLOps best practices, including model deployment, monitoring, and versioning.
- Utilize Vertex AI, MLFlow, KubeFlow, TFX, and other relevant MLOps tools and frameworks to streamline the machine learning lifecycle.
- Collaborate with cross-functional teams to design and implement CI/CD pipelines for continuous integration and deployment using tools such as GitHub Actions, TeamCity, and similar platforms.
- Conduct research and stay up-to-date with the latest advancements in machine learning, deep learning, and MLOps technologies.
- Provide guidance and support to data scientists and software engineers on best practices for machine learning development and deployment.
- Assist in developing tooling strategies by evaluating various options, vendors, and product roadmaps to enhance the efficiency and effectiveness of our AI and data science initiatives.
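The train-and-track loop described above can be sketched in plain Python. This is a hedged illustration only: the gradient-descent fit stands in for a real framework like TensorFlow or PyTorch, and the `run` dictionary is a hypothetical stand-in for MLflow-style parameter/metric/version tracking, not an actual MLflow API.

```python
def train_linear(xs, ys, lr=0.01, epochs=2000):
    """Fit y = w*x + b by plain gradient descent (toy stand-in for a real framework)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # gradients of mean squared error with respect to w and b
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    mse = sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / n
    return w, b, mse

# hypothetical experiment record, standing in for MLflow-style tracking/versioning
run = {"params": {"lr": 0.01, "epochs": 2000}, "model_version": 1}
w, b, mse = train_linear([1, 2, 3, 4], [2, 4, 6, 8])
run["metrics"] = {"mse": mse}
```

In a real MLOps setup the same shape appears at larger scale: parameters logged before training, metrics after, and the model registered under a version so deployments are reproducible.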
You will work on:
We help many of our clients make sense of their large investments in data – be it building analytics solutions or machine learning applications. You will work on cutting-edge cloud-native technologies to crunch terabytes of data into meaningful insights.
What you will do (Responsibilities):
Collaborate with product management & engineering to build highly efficient data pipelines.
You will be responsible for:
- Dealing with large customer data and building highly efficient pipelines
- Building insights dashboards
- Troubleshooting data loss, data inconsistency, and other data-related issues
- Working in a product development environment, delivering stories in a scaled agile delivery methodology
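The clean-transform-aggregate pipeline responsibilities above can be sketched in plain Python; the record schema and field names (`customer`, `amount`) are illustrative assumptions, not a real customer's data model.

```python
from collections import defaultdict

def clean(records):
    """Drop records with a missing customer key or a malformed amount."""
    return [r for r in records
            if r.get("customer") and isinstance(r.get("amount"), (int, float))]

def transform(records):
    """Normalize customer names so variants aggregate together."""
    return [{**r, "customer": r["customer"].lower()} for r in records]

def aggregate(records):
    """Total amount per customer."""
    totals = defaultdict(float)
    for r in records:
        totals[r["customer"]] += r["amount"]
    return dict(totals)

raw = [
    {"customer": "Acme", "amount": 100},
    {"customer": "ACME", "amount": 50},
    {"customer": None, "amount": 10},       # dropped: missing key
    {"customer": "Beta", "amount": "bad"},  # dropped: malformed amount
]
totals = aggregate(transform(clean(raw)))
```

Troubleshooting data loss or inconsistency, also listed above, usually starts by counting how many records each stage drops and why.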
What you bring (Skills):
5+ years of experience in hands-on data engineering & large-scale distributed applications
- Extensive experience in object-oriented programming languages such as Java or Scala
- Extensive experience in RDBMS such as MySQL, Oracle, SQLServer, etc.
- Experience in functional programming languages such as JavaScript, Scala, or Python
- Experience in developing and deploying applications in Linux OS
- Experience in big data processing technologies such as Hadoop, Spark, Kafka, Databricks, etc.
- Experience in Cloud-based services such as Amazon AWS, Microsoft Azure, or Google Cloud Platform
- Experience with Scrum and/or other Agile development processes
- Strong analytical and problem-solving skills
Great if you know (Skills):
- Some exposure to containerization technologies such as Docker, Kubernetes, or Amazon ECS/EKS
- Some exposure to microservices frameworks such as Spring Boot, Eclipse Vert.x, etc.
- Some exposure to NoSQL data stores such as Couchbase, Solr, etc.
- Some exposure to Perl or shell scripting
- Ability to lead R&D and POC efforts
- Ability to learn new technologies on his/her own
- Team player with self-drive to work independently
- Strong communication and interpersonal skills
Advantage Cognologix:
- A higher degree of autonomy, startup culture & small teams
- Opportunities to become an expert in emerging technologies
- Remote working options for the right maturity level
- Competitive salary & family benefits
- Performance-based career advancement
About Cognologix:
Cognologix helps companies disrupt markets by reimagining their business models and innovating like a startup. We are at the forefront of digital disruption and take a business-first approach to help meet our clients' strategic goals.
We are a data-focused organization, helping our clients deliver their next generation of products in the most efficient, modern, and cloud-native way.
- Health & Wellbeing
- Learn & Grow
- Evangelize
- Celebrate Achievements
- Financial Wellbeing
- Medical and Accidental cover.
- Flexible Working Hours.
- Sports Club & much more.
- KSQL
- Data Engineering spectrum (Java/Spark)
- Spark Scala / Kafka Streaming
- Confluent Kafka components
- Basic understanding of Hadoop
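As a plain-Python illustration of the kind of windowed stream aggregation the KSQL / Kafka Streaming skills above refer to: the sketch below counts events per key over tumbling windows. The event shape and 60-second window size are assumptions for illustration; this is not the Confluent API.

```python
from collections import defaultdict

def windowed_counts(events, window_secs=60):
    """Count events per key per tumbling window, keyed by (key, window_start)."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs
        counts[(key, window_start)] += 1
    return dict(counts)

# (timestamp_seconds, key) pairs, as a Kafka topic might deliver them
events = [(0, "click"), (30, "click"), (59, "view"), (61, "click"), (130, "view")]
counts = windowed_counts(events)
```

In KSQL the equivalent would be a `GROUP BY` with a `WINDOW TUMBLING` clause over a stream; the grouping-by-(key, window) logic is the same.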
Senior Software Engineer
MUST HAVE:
Power BI with PL/SQL
Experience: 5+ years
Compensation: 18 LPA
WFH: Hybrid
Required Skills:
- Proven work experience as an Enterprise / Data / Analytics Architect - Data Platform in HANA XSA, XS, Data Intelligence and SDI
- Can work on new and existing architecture decision in HANA XSA, XS, Data Intelligence and SDI
- Well versed with data architecture principles, software / web application design, API design, UI / UX capabilities, XSA / Cloud foundry architecture
- In-depth understanding of database structure (HANA in-memory) principles.
- In-depth understanding of ETL solutions and data integration strategy.
- Excellent knowledge of Software and Application design, API, XSA, and microservices concepts
Roles & Responsibilities:
- Advise on and ensure compliance with the defined Data Architecture principles.
- Identify new technology updates and development tools, including new releases/upgrades/patches as required.
- Analyze technical risks and advise on risk mitigation strategy.
- Advise on and ensure compliance with existing and newly developed data and reporting standards, including naming conventions.
The time window is ideally AEST (8 am till 5 pm), which means starting at 3:30 am IST. We understand this can be very early for an SME supporting from India; hence, we can consider candidates who can support from at least 7 am IST (earlier is possible).
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.
Job Description
Experience : 7+ years
Location : Pune / Hyderabad
Skills :
- Drive and participate in requirements gathering workshops, estimation discussions, design meetings and status review meetings
- Participate and contribute in Solution Design and Solution Architecture for implementing Big Data Projects on-premise and on cloud
- Technical Hands on experience in design, coding, development and managing Large Hadoop implementation
- Proficient in SQL, Hive, Pig, Spark SQL, Shell Scripting, Kafka, Flume, Sqoop on large Big Data and Data Warehousing projects, with a Java, Python or Scala based Hadoop programming background
- Proficient with various development methodologies like waterfall, agile/scrum and iterative
- Good interpersonal skills and excellent communication skills for US- and UK-based clients
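The Hadoop programming background asked for above centers on the map-reduce pattern. A minimal in-process sketch of word count, the canonical example, is below; this illustrates the pattern only and is not the Hadoop API itself.

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # emit a (word, 1) pair for each word in the line
    return [(w.lower(), 1) for w in line.split()]

def reduce_phase(pairs):
    # sum the counts emitted for each word
    totals = defaultdict(int)
    for word, n in pairs:
        totals[word] += n
    return dict(totals)

lines = ["big data big pipelines", "data warehousing"]
word_counts = reduce_phase(chain.from_iterable(map_phase(l) for l in lines))
```

On a real cluster the map output is shuffled so each reducer sees all pairs for its keys; the per-key summation is the same.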
About Us!
A global leader in Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their Data/Workload/ETL/Analytics to the Cloud by leveraging Automation.
We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica and Greenplum platforms, along with ETLs like Informatica, DataStage, Ab Initio & others, to cloud-based data warehousing, with other capabilities in data engineering, advanced analytics solutions, data management, data lakes and cloud optimization.
Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.
We have our own products!
Eagle – Data warehouse Assessment & Migration Planning Product
Raven – Automated Workload Conversion Product
Pelican - Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.
Why join us!
Datametica is a place to innovate, bring new ideas to life and learn new things. We believe in building a culture of innovation, growth and belonging. Our people and their dedication over the years are the key factors in achieving our success.
Benefits we Provide!
Working with Highly Technical and Passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy
Check out more about us on our website below!
www.datametica.com
Cloud infrastructure solutions and support company. (SE1)
- Design, create, test, and maintain data pipeline architecture in collaboration with the Data Architect.
- Build the infrastructure required for extraction, transformation, and loading of data from a wide variety of data sources using Java, SQL, and Big Data technologies.
- Support the translation of data needs into technical system requirements. Support in building complex queries required by the product teams.
- Build data pipelines that clean, transform, and aggregate data from disparate sources
- Develop, maintain and optimize ETLs to increase data accuracy, data stability, data availability, and pipeline performance.
- Engage with Product Management and Business to deploy and monitor products/services on cloud platforms.
- Stay up-to-date with advances in data persistence and big data technologies and run pilots to design the data architecture to scale with the increased data sets of consumer experience.
- Handle data integration, consolidation, and reconciliation activities for digital consumer / medical products.
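The reconciliation activity mentioned above typically means verifying that source and target datasets still agree after integration or migration. A minimal sketch using per-key row digests is below; the table shape and key position are illustrative assumptions.

```python
import hashlib

def row_digest(row):
    """Stable digest of a row's values, for cheap source/target comparison."""
    return hashlib.sha256("|".join(map(str, row)).encode()).hexdigest()

def reconcile(source, target):
    """Return keys whose rows differ, or that exist on only one side."""
    src = {row[0]: row_digest(row) for row in source}
    tgt = {row[0]: row_digest(row) for row in target}
    return sorted(k for k in src.keys() | tgt.keys() if src.get(k) != tgt.get(k))

source = [(1, "alice", 100), (2, "bob", 200), (3, "carol", 300)]
target = [(1, "alice", 100), (2, "bob", 250)]          # row 2 drifted, row 3 missing
mismatches = reconcile(source, target)
```

Comparing digests rather than full rows keeps the comparison cheap when the tables are wide; only mismatched keys need a detailed column-by-column follow-up.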
Job Qualifications:
- Bachelor’s or master's degree in Computer Science, Information management, Statistics or related field
- 5+ years of experience in the Consumer or Healthcare industry in an analytical role, with a focus on building data pipelines, querying data, analyzing, and clearly presenting analyses to members of the data science team.
- Technical expertise with data models and data mining.
- Hands-on knowledge of programming languages including Java, Python, R, and Scala.
- Strong knowledge of Big Data tools like Snowflake, AWS Redshift, Hadoop, MapReduce, etc.
- Knowledge of tools like AWS Glue, S3, AWS EMR, streaming data pipelines, and Kafka/Kinesis is desirable.
- Hands-on knowledge of SQL and NoSQL database design.
- Knowledge of CI/CD for building and hosting solutions.
- AWS certification is an added advantage.
- Strong knowledge of visualization tools like Tableau and QlikView is an added advantage.
- A team player capable of working and integrating across cross-functional teams for implementing project requirements. Experience in technical requirements gathering and documentation.
- Ability to work effectively and independently in a fast-paced agile environment with tight deadlines
- A flexible, pragmatic, and collaborative team player with the innate ability to engage with data architects, analysts, and scientists
Years of Exp: 3-6+ Years
Skills: Scala, Python, Hive, Airflow, Spark
Languages: Java, Python, Shell Scripting
GCP: BigTable, DataProc, BigQuery, GCS, Pubsub
OR
AWS: Athena, Glue, EMR, S3, Redshift
MongoDB, MySQL, Kafka
Platforms: Cloudera / Hortonworks
AdTech domain experience is a plus.
Job Type - Full Time